Docstrings vs Comments - python

I'm a bit confused over the difference between docstrings and comments in python.
In my class my teacher introduced something known as a 'design recipe', a set of steps that will supposedly help us students plan and organize our coding better in Python. From what I understand, the below is an example of the steps we follow - this so-called design recipe (the stuff in the quotations):
def term_work_mark(a0_mark, a1_mark, a2_mark, ex_mark, midterm_mark):
    ''' (float, float, float, float, float) -> float
    Takes your marks on a0_mark, a1_mark, a2_mark, ex_mark and midterm_mark,
    calculates their respective weight contributions and sums these
    contributions to deliver your overall term mark out of a maximum of 55
    (this is because the exam mark is not taken account of in this function).

    >>> term_work_mark(5, 5, 5, 5, 5)
    11.8
    >>> term_work_mark(0, 0, 0, 0, 0)
    0.0
    '''
    a0_component = contribution(a0_mark, a0_max_mark, a0_weight)
    a1_component = contribution(a1_mark, a1_max_mark, a1_weight)
    a2_component = contribution(a2_mark, a2_max_mark, a2_weight)
    ex_component = contribution(ex_mark, exercises_max_mark, exercises_weight)
    mid_component = contribution(midterm_mark, midterm_max_mark, midterm_weight)
    return (a0_component + a1_component + a2_component + ex_component +
            mid_component)
As far as I understand, this is basically a docstring, and in our version of a docstring it must include three things: a description, examples of what your function should do if you enter it in the Python shell, and a 'type contract', a section that shows what types you pass in and what type the function will return.
Now this is all well and good, but our assignments also require us to have comments which explain the nature of our functions, using the '#' symbol.
So, my question is, haven't I already explained what my function will do in the description section of the docstring? What's the point of adding comments if I'll essentially be telling the reader the exact same thing?

It appears your teacher is a fan of How to Design Programs ;)
I'd tackle this as writing for two different audiences who won't always overlap.
First there are the docstrings; these are for people who are going to be using your code without needing or wanting to know how it works. Docstrings can be turned into actual documentation. Consider the official Python documentation: what's available in each library and how to use it, with no implementation details (unless they directly relate to use).
Secondly there are in-code comments; these are to explain what is going on to people (generally you!) who want to extend the code. These will not normally be turned into documentation as they are really about the code itself rather than usage. Now there are about as many opinions on what makes for good comments (or lack thereof) as there are programmers. My personal rules of thumb for adding comments are to explain:
Parts of the code that are necessarily complex. (Optimisation comes to mind)
Workarounds for code you don't have control over, that may otherwise appear illogical
I'll admit to TODOs as well, though I try to keep that to a minimum
Where I've chosen a simpler algorithm, noting where a better-performing (but more complex) option could go if performance in that section later becomes critical
Since you're coding in an academic setting, and it sounds like your lecturer is going for verbose, I'd say just roll with it. Use code comments to explain how you are doing what you say you are doing in the design recipe.
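For example, here is a hedged sketch of how a docstring and a code comment might coexist in one of the helpers the question assumes; the contribution name and its arguments come from the question's code, while the body and the doctest values are guesses:
def contribution(mark, max_mark, weight):
    ''' (float, float, float) -> float
    Return the weighted contribution of mark out of max_mark.

    >>> contribution(5.0, 10.0, 20.0)
    10.0
    '''
    # Comment for maintainers: scale to a fraction first so the weights
    # can be changed later without touching this formula.
    return (mark / max_mark) * weight
The docstring tells a caller what they get back; the comment records a decision only a maintainer cares about.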

I believe it's worth mentioning what PEP 8 says about this, i.e. the core concept.
Docstrings
Conventions for writing good documentation strings (a.k.a. "docstrings") are immortalized in PEP 257.
Write docstrings for all public modules, functions, classes, and methods. Docstrings are not necessary for non-public methods, but you should have a comment that describes what the method does. This comment should appear after the def line.
PEP 257 describes good docstring conventions. Note that most importantly, the """ that ends a multiline docstring should be on a line by itself, e.g.:
"""Return a foobang
Optional plotz says to frobnicate the bizbaz first.
"""
For one liner docstrings, please keep the closing """ on the same line.
Comments
Block comments
Generally apply to some (or all) code that follows them, and are indented to the same level as that code. Each line of a block comment starts with a # and a single space (unless it is indented text inside the comment).
Paragraphs inside a block comment are separated by a line containing a single #.
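A short, runnable illustration of that layout (my own sketch, not taken from PEP 8 itself):
samples = [3.0, 4.5, 5.0, 6.5]
n = 2

# Compute the rolling average over the last n samples using only
# built-ins, so the example has no dependencies.
#
# Paragraphs inside a block comment are separated by a line
# containing a single "#", exactly as described above.
rolling_average = sum(samples[-n:]) / n  # 5.75 for the data above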
Inline Comments
Use inline comments sparingly.
An inline comment is a comment on the same line as a statement. Inline comments should be separated by at least two spaces from the statement. They should start with a # and a single space.
Inline comments are unnecessary and in fact distracting if they state the obvious.
Don't do this:
x = x + 1  # Increment x
But sometimes, this is useful:
x = x + 1  # Compensate for border
Reference
https://www.python.org/dev/peps/pep-0008/#documentation-strings
https://www.python.org/dev/peps/pep-0008/#inline-comments
https://www.python.org/dev/peps/pep-0008/#block-comments
https://www.python.org/dev/peps/pep-0257/

As for comments and docstrings: the docstring is there to explain the overall use and basic information of a method. Comments, on the other hand, are meant to give specific information about blocks or lines: a # TODO reminds you of what you still want to do, a comment can explain a variable's purpose, and so on. By the way, in IDLE the docstring is shown as a tooltip when you hover over the method's name.
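A tiny sketch of that split (the function name and the TODO are hypothetical):
def send_report(recipients):
    """Email the weekly report to every address in recipients."""
    # TODO: batch the addresses so one bad address does not block the rest.
    for address in recipients:
        ...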

Quoting from this page http://www.pythonforbeginners.com/basics/python-docstrings/
Python documentation strings (or docstrings) provide a convenient way
of associating documentation with Python modules, functions, classes,
and methods.
An object's docstring is defined by including a string constant as the
first statement in the object's definition.
It's specified in source code that is used, like a comment, to
document a specific segment of code.
Unlike conventional source code comments, the docstring should describe
what the function does, not how.
All functions should have a docstring.
This allows the program to inspect these comments at run time, for
instance as an interactive help system, or as metadata.
Docstrings can be accessed by the __doc__ attribute on objects.
Docstrings can be accessed through a program (via __doc__), whereas inline comments cannot be accessed at all.
Interactive help systems like those in bpython and IPython can use docstrings to display the docstring during development, so you don't have to go back to the source every time.
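A quick way to see that at work (a minimal sketch with a throwaway function name):
def greet(name):
    """Return a greeting for the given name."""
    # This comment is invisible at runtime; the docstring above is not.
    return "Hello, " + name

print(greet.__doc__)  # -> Return a greeting for the given name.
help(greet)           # interactive help renders the same docstring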

Related

Is it a good practice to use triple quotes to create "docstrings" in non-standard contexts?

I am looking at someone's code which has this kind of "docstrings" all over the place:
SLEEP_TIME_ON_FAILURE = 5
"""Time to keep the connection open in case of failure."""
SOCKET_TIMEOUT = 15
"""Socket timeout for inherited socket."""
...
As per the Python documentation, docstrings are applicable only in the context of the beginning of a module, class, or a method.
What are the implications of the above non-standard practice? Why does Python allow this? Doesn't it have a performance impact?
As far as Python is concerned, these aren't docstrings. They're just string literals used as expression statements. You can do that - you can use any valid Python expression as its own statement. Python doesn't care whether the expression actually does anything. For a string on its own line, the only performance impact is a very small amount of extra work at bytecode compilation time; there's no impact at runtime, since these strings get optimized out.
Some documentation generators do look at these strings. For example, the very common Sphinx autodoc extension will parse these strings to document whatever is directly above them. Check whether you're using anything like that before you change the code.
In Python, """ is the syntax for a multi-line string.
s1 = """This is a multi-line string."""
s2 = """This is also a multi-line
string that stretches
across multiple lines"""
If these strings are not assigned to a variable, they are discarded immediately and are essentially ignored, apart from a small amount of overhead; comments using #, on the other hand, are completely ignored by the interpreter.
The only exception to this rule is when such a string comes immediately after a function or class definition, or at the top of a module. In that case, it is stored in the special attribute __doc__.
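A small demonstration of that difference (hypothetical names, Python 3 assumed):
SLEEP_TIME = 5
"""Time to sleep between retries."""  # discarded string literal

def retry():
    """Retry the failed call once."""

print(retry.__doc__)  # -> Retry the failed call once.
# The string after SLEEP_TIME is unreachable at runtime: it was
# evaluated, had no effect, and was thrown away. Only tools that parse
# the source (e.g. Sphinx autodoc) can still see it.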
According to PEP8,
Documentation Strings
Conventions for writing good documentation strings (a.k.a. "docstrings") are immortalized in PEP 257.
Write docstrings for all public modules, functions, classes, and methods. Docstrings are not necessary for non-public methods, but you should have a comment that describes what the method does. This comment should appear after the def line.
In those cases you should use an inline comment, which the PEP 8 style guide defines at https://www.python.org/dev/peps/pep-0008/#comments, e.g.:
SLEEP_TIME_ON_FAILURE = 5  # Time to keep the connection open in case of failure
SOCKET_TIMEOUT = 15  # Socket timeout for inherited socket

PEP8, locals() and interpolation

Here is some code:
foo = "Bears"
"Lions, Tigers and %(foo)s" % locals()
My PEP8 linter (SublimeLinter) complains about this, because foo is "unreferenced". My question is whether PEP8 should count this type of string interpolation as "referenced", or if there is a good reason to consider this "bad style".
Well, it isn't referenced. The part that's questionable style is using locals() to access variables instead of just accessing them by name. See this previous question for why that's a dubious idea. It's not a terrible thing, but it's not good style for a program that you want to maintain in the long term.
Edit: It's true that when you use a literal format string, it seems more explicit. But part of the point of the previous post is that in a larger program, you will probably wind up not using a literal format string. If it's a small program and you don't care, go ahead and use it. But warning about things that are likely to cause maintainability problems later is also part of what style guides and linters are for.
Also, locals isn't a canonical representation of names that are explicitly referenced in the literal. It's a canonical representation of all names in the local namespace. You can still do it if you like, but it's basically a loose/sloppy alternative to explicitly using the names you're using, which is again, exactly the sort of thing linters are supposed to warn you about.
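For comparison, here is a sketch of the explicit style the linter is nudging you toward (the variable names are just the ones from the question):
foo = "Bears"

# Pass the value by name instead of handing the formatter the whole
# local namespace; now the reference to foo is visible to any linter.
message = "Lions, Tigers and {foo}".format(foo=foo)
print(message)  # -> Lions, Tigers and Bears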
Even if you reject BrenBarn's argument that foo isn't referenced, if you accept the argument that passing locals() in string formatting should be flagged, it may not be worth writing the code to consider foo referenced.
First, in every case where that extra code would help, the construct is not acceptable anyway, so the user is going to have to ignore a lint warning regardless. Yes, there is some harm in giving the user two lint warnings to ignore when there's only actually one problem, especially if one of the warnings is somewhat misleading. But is it enough harm to justify writing very complicated code and introducing new bugs into the linter?
You also have to consider that for this to actually work, the linter has to recognize not just % formatting, but also {} formatting, and every other kind of string formatting, HTML templating, etc. that the user could be using. In practice, this means handling various very common forms, and providing some kind of hook for the user to describe anything else.
And, on top of that, even if you don't think it should work with arbitrarily-generated format strings, it surely has to at least work with l10n. How is that going to work? If the format string is generated by something like gettext, the linter has no way of knowing whether foo is referenced, unless it can check all of the translations and see that at least one of them references foo—which means it has to understand (or have hooks to be taught) every string translation mechanism, and have access to the translation database.
So, I would suggest that, even if you consider the warning spurious in this case, you leave it there anyway. At most, add something which qualifies the warning:
foo is possibly unreferenced, but in a function that uses locals()
The following wouldn't make SublimeLinter happy either. It looks up each variable name referenced in the string and substitutes the corresponding value from the namespace mapping, which defaults to the caller's locals. As such, it shows the inherent limitation a utility like SublimeLinter has when trying to determine whether something has been referenced in Python. My advice is to just ignore SublimeLinter or add code to fake it out, like foo = foo. I've had to do something like the latter to get rid of C compiler warnings about things which were both legal and intended.
import re
import sys

SUB_RE = re.compile(r"%\((.*?)\)s")

def local_vars_subst(s, namespace=None):
    # Default to the caller's local variables, like the "% locals()" idiom.
    if namespace is None:
        namespace = sys._getframe(1).f_locals

    def repl(matchobj):
        var = matchobj.group(1).strip()
        try:
            retval = namespace[var]
        except KeyError:
            retval = "<undefined>"
        return retval

    return SUB_RE.sub(repl, s)

foo = "Bears"
print(local_vars_subst("Lions, Tigers and %(foo)s"))

Better Python list Naming Other than "list"

Is it better not to name list variables "list"? It conflicts with the built-in list name in Python. Then, what's a better name? "input_list" sounds kind of awkward.
I know it can be problem-specific, but, say I have a quick sort function; quick_sort(unsorted_list) is still kind of lengthy, since the list passed to a sorting function is clearly unsorted from context.
Any ideas?
I like to name it with the plural of whatever's in it. So, for example, if I have a list of names, I call it names, and then I can write:
for name in names:
which I think looks pretty nice. But generally for your own sanity you should name your variables so that you can know what they are just from the name. This convention has the added benefit of being type-agnostic, just like Python itself, because names can be any iterable object such as a tuple, a dict, or your very own custom (iterable) object. You can use for name in names on any of those, and if you had a tuple called names_list that would just be weird.
(Added from a comment below:) There are a few situations where you don't have to do this. Using a canonical variable like i to index a short loop is OK because i is usually used that way. If your variable is used on more than one page worth of code, so that you can't see its entire lifetime at once, you should give it a sensible name.
goats
Variable names should refer to what they are, not just what type they are.
Python stands for readability, so basically you should name variables in a way that promotes readability (see PEP 20). You should only have a general rule of consistency, and should break this consistency in the following situations:
When applying the rule would make the code less readable, even for someone who is used to reading code that follows the rules.
To be consistent with surrounding code that also breaks it (maybe for historic reasons) -- although this is also an opportunity to clean up someone else's mess (in true XP style).
Also, use the function naming rules: lowercase with words separated by underscores as necessary to improve readability.
All this is taken from PEP 8
Just use lst, or seq (for sequence)
I use a naming convention based on descriptive name and type. (I think I learned this from a Jeff Atwood blog post but I can't find it.)
goats_list
for goat in goats_list:
    goat.bleat()

cow_hash = {}
Anything more complicated (list_list_hash_list) I make a class.
What about L?
Why not just use unsorted? I prefer names which communicate ideas, not data types. There are special cases where the type of a variable is important, but in most cases it's obvious from the context, like in yours: quick sort obviously works on a list.

Is it bad style to reassign long variables as a local abbreviation?

I prefer to use long identifiers to keep my code semantically clear, but in the case of repeated references to the same identifier, I'd like for it to "get out of the way" in the current scope. Take this example in Python:
def define_many_mappings_1(self):
    self.define_bidirectional_parameter_mapping("status", "current_status")
    self.define_bidirectional_parameter_mapping("id", "unique_id")
    self.define_bidirectional_parameter_mapping("location", "coordinates")
    # etc...
Let's assume that I really want to stick with this long method name, and that these arguments are always going to be hard-coded.
Implementation 1 feels wrong because most of each line is taken up with a repetition of characters. The lines are also rather long in general, and will exceed 80 characters easily when nested inside of a class definition and/or a try/except block, resulting in ugly line wrapping. Let's try using a for loop:
def define_many_mappings_2(self):
    mappings = [("status", "current_status"),
                ("id", "unique_id"),
                ("location", "coordinates")]
    for mapping in mappings:
        self.define_parameter_mapping(*mapping)
I'm going to lump together all similar iterative techniques under the umbrella of Implementation 2, which has the improvement of separating the "unique" arguments from the "repeated" method name. However, I dislike that this has the effect of placing the arguments before the method they're being passed into, which is confusing. I would prefer to retain the "verb followed by direct object" syntax.
I've found myself using the following as a compromise:
def define_many_mappings_3(self):
    d = self.define_bidirectional_parameter_mapping
    d("status", "current_status")
    d("id", "unique_id")
    d("location", "coordinates")
In Implementation 3, the long method is aliased by an extremely short "abbreviation" variable. I like this approach because it is immediately recognizable as a set of repeated method calls on first glance while having less redundant characters and much shorter lines. The drawback is the usage of an extremely short and semantically unclear identifier "d".
What is the most readable solution? Is the usage of an "abbreviation variable" acceptable if it is explicitly assigned from an unabbreviated version in the local scope?
itertools to the rescue again! Try using starmap - here's a simple demo:
list(itertools.starmap(min, [(1, 2), (2, 2), (3, 2)]))
prints
[1, 2, 2]
starmap returns a lazy iterator, so to actually invoke the methods you have to consume it, for example with list.
import itertools

def define_many_mappings_4(self):
    list(itertools.starmap(
        self.define_parameter_mapping,
        [
            ("status", "current_status"),
            ("id", "unique_id"),
            ("location", "coordinates"),
        ]))
Normally I'm not a fan of using a dummy list construction to invoke a sequence of functions, but this arrangement seems to address most of your concerns.
If define_parameter_mapping returns None, then you can replace list with any, and then all of the function calls will get made, and you won't have to construct that dummy list.
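Concretely, that suggestion would look something like this; it is a sketch mirroring define_many_mappings_4 above and assumes define_parameter_mapping always returns None:
import itertools

def define_many_mappings_5(self):
    # any() consumes the iterator lazily; because every call returns a
    # falsy None it never short-circuits, so every mapping is defined
    # without building a throwaway list.
    any(itertools.starmap(
        self.define_parameter_mapping,
        [
            ("status", "current_status"),
            ("id", "unique_id"),
            ("location", "coordinates"),
        ]))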
I would go with Implementation 2, but it is a close call.
I think #2 and #3 are equally readable. Imagine if you had 100s of mappings... Either way, I cannot tell what the code at the bottom is doing without scrolling to the top. In #2 you are giving a name to the data; in #3, you are giving a name to the function. It's basically a wash.
Changing the data is also a wash, since either way you just add one line in the same pattern as what is already there.
The difference comes if you want to change what you are doing to the data. For example, say you decide to add a debug message for each mapping you define. With #2, you add a statement to the loop, and it is still easy to read. With #3, you have to create a lambda or something. Nothing wrong with lambdas -- I love Lisp as much as anybody -- but I think I would still find #2 easier to read and modify.
But it is a close call, and your taste might be different.
I think #3 is not bad, although I might pick a slightly longer identifier than d. But often this type of thing becomes data-driven, so you would find yourself using a variation of #2 where you are looping over the result of a database query or something from a config file.
There's no right answer, so you'll get opinions on all sides here, but I would by far prefer to see #2 in any code I was responsible for maintaining.
#1 is verbose, repetitive, and difficult to change (e.g. say you need to call two methods on each pair or add logging -- then you must change every line). But this is often how code evolves, and it is a fairly familiar and harmless pattern.
#3 suffers the same problem as #1, but is slightly more concise at the cost of requiring what is basically a macro and thus new and slightly unfamiliar terms.
#2 is simple and clear. It lays out your mappings in data form, and then iterates them using basic language constructs. To add new mappings, you only need add a line to the array. You might end up loading your mappings from an external file or URL down the line, and that would be an easy change. To change what is done with them, you only need change the body of your for loop (which itself could be made into a separate function if the need arose).
Your complaint of #2 of "object before verb" doesn't bother me at all. In scanning that function, I would basically first assume the verb does what it's supposed to do and focus on the object, which is now clear and immediately visible and maintainable. Only if there were problems would I look at the verb, and it would be immediately evident what it is doing.

Python code readability

I have a programming experience with statically typed languages. Now writing code in Python I feel difficulties with its readability. Lets say I have a class Host:
class Host(object):
    def __init__(self, name, network_interface):
        self.name = name
        self.network_interface = network_interface
From this definition I don't understand what "network_interface" should be. Is it a string, like "eth0", or is it an instance of a class NetworkInterface? The only way I can think of to solve this is by documenting the code with a "docstring". Something like this:
class Host(object):
    ''' Attributes:
    #name: a string
    #network_interface: an instance of class NetworkInterface'''
Or maybe there are naming conventions for things like that?
Using dynamic languages will teach you something about static languages: all the help you got from the static language that you now miss in the dynamic language, it wasn't all that helpful.
To use your example, in a static language, you'd know that the parameter was a string, and in Python you don't. So in Python you write a docstring. And while you're writing it, you realize you had more to say about it than, "it's a string". You need to say what data is in the string, and what format it should have, and what the default is, and something about error conditions.
And then you realize you should have written all that down for your static language as well. Sure, Java would force you to know that it was a string, but there are all these other details that need to be specified, and you have to do that work manually in any language.
The docstring conventions are at PEP 257.
The example there follows this format for specifying arguments; you can add the types if they matter:
def complex(real=0.0, imag=0.0):
    """Form a complex number.

    Keyword arguments:
    real -- the real part (default 0.0)
    imag -- the imaginary part (default 0.0)
    """
    if imag == 0.0 and real == 0.0: return complex_zero
    ...
There was also a rejected PEP for docstrings for attributes (rather than constructor arguments).
The most pythonic solution is to document with examples. If possible, state what operations an object must support to be acceptable, rather than a specific type.
class Host(object):
    def __init__(self, name, network_interface):
        """Initialise host with given name and network_interface.

        network_interface -- must support the same operations as NetworkInterface

        >>> network_interface = NetworkInterface()
        >>> host = Host("my_host", network_interface)
        """
        ...
At this point, hook your source up to doctest to make sure your doc examples continue to work in future.
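The usual minimal hook for that is the standard doctest idiom at the bottom of the module:
if __name__ == "__main__":
    import doctest
    doctest.testmod()  # re-runs every >>> example found in docstrings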
Personally, I found it very useful to use pylint to validate my code.
If you follow pylint's suggestions, your code almost automatically becomes more readable,
you improve your Python writing skills, and you respect naming conventions. You can also define your own naming conventions and so on. It's very useful, especially for a Python beginner.
I suggest you use it.
Python, though not as overtly typed as C or Java, is still typed and will throw exceptions if you're doing things with types that simply do not play nice together.
To that end, if you're concerned about your code being used correctly, maintained correctly, etc. simply use docstrings, comments, or even more explicit variable names to indicate what the type should be.
Even better yet, include code that will allow it to handle whichever type it may be passed as long as it yields a usable result.
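As a sketch of that last point (the is_reachable method and the "up" attribute are purely hypothetical):
class Host(object):
    def __init__(self, name, network_interface):
        self.name = name
        self.network_interface = network_interface

    def is_reachable(self):
        # Duck typing: accept anything that exposes an "up" attribute,
        # whether it is a real NetworkInterface or a test double.
        return bool(getattr(self.network_interface, "up", False))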
One benefit of static typing is that types are a form of documentation. When programming in Python, you can document more flexibly and fluently. Of course in your example you want to say that network_interface should implement NetworkInterface, but in many cases the type is obvious from the context, the variable name, or convention, and in these cases omitting the obvious can produce more readable code. A common approach is to describe the meaning of a parameter while giving the type only implicitly.
For example:
def Bar(foo, count):
    """Bar the foo the given number of times."""
    ...
This describes the function tersely and precisely. What foo and bar mean will be obvious from context, and that count is a (positive) integer is implicit.
For your example, I'd just mention the type in the document string:
"""Create a named host on the given NetworkInterface."""
This is shorter, more readable, and contains more information than a listing of the types.
