Here is some code:
foo = "Bears"
"Lions, Tigers and %(foo)s" % locals()
My PEP8 linter (SublimeLinter) complains about this, because foo is "unreferenced". My question is whether PEP8 should count this type of string interpolation as "referenced", or if there is a good reason to consider this "bad style".
Well, it isn't referenced. The part that's questionable style is using locals() to access variables instead of just accessing them by name. See this previous question for why that's a dubious idea. It's not a terrible thing, but it's not good style for a program that you want to maintain in the long term.
Edit: It's true that when you use a literal format string, it seems more explicit. But part of the point of the previous post is that in a larger program, you will probably wind up not using a literal format string. If it's a small program and you don't care, go ahead and use it. But warning about things that are likely to cause maintainability problems later is also part of what style guides and linters are for.
Also, locals isn't a canonical representation of names that are explicitly referenced in the literal. It's a canonical representation of all names in the local namespace. You can still do it if you like, but it's basically a loose/sloppy alternative to explicitly using the names you're using, which is again, exactly the sort of thing linters are supposed to warn you about.
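For comparison, here is a minimal sketch of the explicit alternative; because foo appears by name, a linter can see that it's referenced:
foo = "Bears"
print "Lions, Tigers and %(foo)s" % {"foo": foo}  # explicit mapping instead of locals()
print "Lions, Tigers and {foo}".format(foo=foo)   # str.format spelling of the same idea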
Even if you reject BrenBarn's argument that foo isn't referenced, if you accept the argument that passing locals() to string formatting should be flagged, it may not be worth writing the code to consider foo referenced.
First, in every case where that extra code would help, the construct is not acceptable anyway, so the user is going to have to ignore a lint warning regardless. Yes, there is some harm in giving the user two lint warnings to ignore when there's really only one problem, especially if one of the warnings is somewhat misleading. But is it enough harm to justify writing very complicated code and introducing new bugs into the linter?
You also have to consider that for this to actually work, the linter has to recognize not just % formatting, but also {} formatting, and every other kind of string formatting, HTML templating, etc. that the user could be using. In practice, this means handling various very common forms, and providing some kind of hook for the user to describe anything else.
And, on top of that, even if you don't think it should work with arbitrarily-generated format strings, it surely has to at least work with l10n. How is that going to work? If the format string is generated by something like gettext, the linter has no way of knowing whether foo is referenced, unless it can check all of the translations and see that at least one of them references foo—which means it has to understand (or have hooks to be taught) every string translation mechanism, and have access to the translation database.
So, I would suggest that, even if you consider the warning spurious in this case, you leave it there anyway. At most, add something which qualifies the warning:
foo is possibly unreferenced, but in a function that uses locals()
The following wouldn't make SublimeLinter happy either. It looks up each variable name referenced in the string and substitutes the corresponding value from the namespace mapping, which defaults to the caller's locals. As such, it shows the inherent limitation a utility like SublimeLinter has when trying to determine whether something has been referenced in Python. My advice is to just ignore SublimeLinter or add code to fake it out, like foo = foo. I've had to do something like the latter to get rid of C compiler warnings about things which were both legal and intended.
import re
import sys

SUB_RE = re.compile(r"%\((.*?)\)s")

def local_vars_subst(s, namespace=None):
    # default to the caller's local variables
    if namespace is None:
        namespace = sys._getframe(1).f_locals
    def repl(matchobj):
        var = matchobj.group(1).strip()
        try:
            retval = namespace[var]
        except KeyError:
            retval = "<undefined>"
        return retval
    return SUB_RE.sub(repl, s)

foo = "Bears"
print local_vars_subst("Lions, Tigers and %(foo)s")
Related
PEP 8 recommends using a single trailing underscore to avoid conflicts with python keywords, but what about conflicts with module names for standard python modules? Should that also be a single trailing underscore?
I'm imagining something like this:
import time
time_ = time.time()
PEP 8 doesn't seem to address it directly.
The trailing underscore is obviously necessary when you're colliding with a keyword, because your code would otherwise raise a SyntaxError (or, if you're really unlucky, compile to mean something completely different than you intended).
So, even in contexts where you have a class attribute, instance attribute, function parameter, or local variable that you want to name class, you have to go with class_ instead.
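For instance, a trivial sketch of why the underscore is unavoidable there:
# class = "warrior"   # SyntaxError: 'class' is a keyword
class_ = "warrior"     # the conventional PEP 8 workaround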
But the same isn't true for time. And I think in those cases, you shouldn't postfix an underscore for time.
There's precedent for that—multiple classes in the stdlib itself have methods or data attributes named time (and none of them have time_).
Of course there's the case where you're creating a name at the same scope as the module (usually meaning a global variable or function). Then you've got much more potential for confusion, and you're hiding the ability to access anything on the time module for the rest of the current scope.
I think 90% of the time, the answer is going to be "That shouldn't be a global".
But that still leaves the other 10%.
And there's also the case where your name is in a restricted namespace, but that namespace is a local scope inside a function where you need to access the time module.
Or, maybe, in a long, complicated function (which you shouldn't have any of, but… sometimes you do). If it wouldn't be obvious to a human reader that time is a local rather than the module, that's just as bad as confusing the interpreter.
Here, I think that 99% of the remaining time, the answer is "Just pick a different name".
For example, look at this code:
def dostuff(iterable):
    time = time.time()  # UnboundLocalError: the local name shadows the module
    for thing in iterable:
        dothing(thing)
    return time.time() - time # oops!
The obvious answer here is to rename the variable start or t0 or something else. Besides solving the problem, it's also a more meaningful name.
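A sketch of that fix, reusing the hypothetical dothing from the snippet above:
import time

def dostuff(iterable):
    start = time.time()  # renamed, so it no longer shadows the time module
    for thing in iterable:
        dothing(thing)
    return time.time() - start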
But that still leaves the 1%.
For example, there are libraries that generate Python code out of, say, a protocol specification, or a .NET or ObjC interface, where the names aren't under your control; all you can do is apply some kind of programmatic and unambiguous rule to the translated names. In that case, I think a rule that appends _ to stdlib module names as well as keywords might be a good idea.
You can probably come up with other examples where the variable can't just be arbitrarily renamed, and has to (at least potentially) live in the same scope as the time module, and so on. In any such cases, I'd go for the _ suffix.
There are many questions on SO about using Python's eval on insecure strings (eg.: Security of Python's eval() on untrusted strings?, Python: make eval safe). The unanimous answer is that this is a bad idea.
However, I found little information on which strings can be considered safe (if any).
Now I'm wondering if there is a definition of "safe strings" available (eg.: a string that only contains lower case ascii chars or any of the signs +-*/()). The exploits I found generally relied on either of _.,:[]'" or the like. Can such an approach be secure (for use in a graph painting web application)?
Otherwise, I guess using a parsing package as Alex Martelli suggested is the only way.
EDIT:
Unfortunately, there are neither answers that give a compelling explanation for why/how the above strings are to be considered insecure (a tiny working exploit) nor explanations for the contrary. I am aware that using eval should be avoided, but that's not the question. Hence, I'll award a bounty to the first who comes up with either a working exploit or a really good explanation why a string mangled as described above is to be considered (in)secure.
Here you have a working "exploit" with your restrictions in place: it only contains lower case ascii chars and the signs +-*/().
It relies on a 2nd eval layer.
def mask_code(python_code):
    # rebuild each character from its ordinal, joined with "+"
    s = "+".join(["chr(" + str(ord(i)) + ")" for i in python_code])
    return "eval(" + s + ")"

bad_code = '''__import__("os").getcwd()'''
masked = mask_code(bad_code)
print masked
print eval(masked)
output:
eval(chr(95)+chr(95)+chr(105)+chr(109)+chr(112)+chr(111)+chr(114)+chr(116)+chr(95)+chr(95)+chr(40)+chr(34)+chr(111)+chr(115)+chr(34)+chr(41)+chr(46)+chr(103)+chr(101)+chr(116)+chr(99)+chr(119)+chr(100)+chr(40)+chr(41))
/home/user
This is a very trivial "exploit". I'm sure there's countless others, even with further character restrictions.
It bears repeating that one should always use a parser or ast.literal_eval(). Only by parsing the tokens can one be sure the string is safe to evaluate. Anything else is betting against the house.
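A quick illustration of the difference; ast.literal_eval accepts only Python literals, so anything that calls a function is rejected outright:
import ast

print ast.literal_eval("[1, 2, 3]")         # fine: a plain list literal
print ast.literal_eval("__import__('os')")  # raises ValueError: no calls allowed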
No, there isn't, or at least, not a sensible, truly secure way. Python is a highly dynamic language, and the flipside of that is that it's very easy to subvert any attempt to lock the language down.
You either need to write your own parser for the subset you want, or use something existing, like ast.literal_eval(), for particular cases as you come across them. Use a tool designed for the job at hand, rather than trying to force an existing one to do the job you want, badly.
Edit:
An example of two strings that, while fitting your description, would execute arbitrary code if eval()ed in order (this particular example runs evil.__method__()):
"from binascii import *"
"eval(unhexlify('6576696c2e5f5f6d6574686f645f5f2829'))"
An exploit similar to goncalopp's, but that also satisfies the restriction that the string 'eval' is not a substring of the exploit:
def to_chrs(text):
    return '+'.join('chr(%d)' % ord(c) for c in text)

def _make_getattr_call(obj, attr):
    return 'getattr(*(list(%s for a in chr(1)) + list(%s for a in chr(1))))' % (obj, attr)

def make_exploit(code):
    get = to_chrs('get')
    builtins = to_chrs('__builtins__')
    eval = to_chrs('eval')
    code = to_chrs(code)
    return (_make_getattr_call(
        _make_getattr_call('globals()', '{get}') + '({builtins})',
        '{eval}') + '({code})').format(**locals())
It uses a combination of a generator expression and tuple unpacking to call getattr with two arguments without using the comma.
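The trick in isolation: list(x for a in chr(1)) builds a one-element list without any comma, and * unpacks the concatenated pair into getattr's two arguments:
>>> list("abc" for a in chr(1))
['abc']
>>> getattr(*(list("abc" for a in chr(1)) + list("upper" for a in chr(1))))()
'ABC'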
An example usage:
>>> exploit = make_exploit('__import__("os").system("echo $PWD")')
>>> print exploit
getattr(*(list(getattr(*(list(globals() for a in chr(1)) + list(chr(103)+chr(101)+chr(116) for a in chr(1))))(chr(95)+chr(95)+chr(98)+chr(117)+chr(105)+chr(108)+chr(116)+chr(105)+chr(110)+chr(115)+chr(95)+chr(95)) for a in chr(1)) + list(chr(101)+chr(118)+chr(97)+chr(108) for a in chr(1))))(chr(95)+chr(95)+chr(105)+chr(109)+chr(112)+chr(111)+chr(114)+chr(116)+chr(95)+chr(95)+chr(40)+chr(34)+chr(111)+chr(115)+chr(34)+chr(41)+chr(46)+chr(115)+chr(121)+chr(115)+chr(116)+chr(101)+chr(109)+chr(40)+chr(34)+chr(101)+chr(99)+chr(104)+chr(111)+chr(32)+chr(36)+chr(80)+chr(87)+chr(68)+chr(34)+chr(41))
>>> eval(exploit)
/home/giacomo
0
This shows that it's really hard to define text-only restrictions that make the code safe. Even things like 'eval' in code are not safe. Either you must remove the possibility of executing a function call at all, or you must remove all dangerous built-ins from eval's environment. My exploit also shows that getattr is as bad as eval even when you cannot use the comma, since it allows you to walk arbitrarily through the object hierarchy. For example, you can obtain the real eval function even if the environment does not provide it:
def real_eval():
    get_subclasses = _make_getattr_call(
        _make_getattr_call(
            _make_getattr_call('()',
                               to_chrs('__class__')),
            to_chrs('__base__')),
        to_chrs('__subclasses__')) + '()'
    catch_warnings = 'next(c for c in %s if %s == %s)()' % (get_subclasses,
        _make_getattr_call('c', to_chrs('__name__')),
        to_chrs('catch_warnings'))
    return _make_getattr_call(
        _make_getattr_call(
            _make_getattr_call(catch_warnings, to_chrs('_module')),
            to_chrs('__builtins__')),
        to_chrs('get')) + '(%s)' % to_chrs('eval')
>>> no_eval = __builtins__.__dict__.copy()
>>> del no_eval['eval']
>>> eval(real_eval(), {'__builtins__': no_eval})
<built-in function eval>
However, if you remove all the built-ins, the code becomes safe:
>>> eval(real_eval(), {'__builtins__': None})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
NameError: name 'getattr' is not defined
Note that setting '__builtins__' to None also removes chr, list, tuple, etc.
The combination of your character restrictions and setting '__builtins__' to None is completely safe, because the user has no way to access anything. He can't use the ., the brackets [], or any built-in function or type.
That said, what you can evaluate this way is pretty limited. You can't do much more than operations on numbers.
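For instance, plain arithmetic still evaluates fine in that stripped-down environment:
>>> eval("(1 + 2) * 3 - 4.5 / 9", {'__builtins__': None})
8.5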
It's probably enough to remove eval, getattr, and chr from the built-ins to make the code safe; at least I can't think of a way to write an exploit that does not use one of them.
A "parsing" approach is probably safer and gives more flexibility. For example, this recipe is pretty good and is also easily customizable to add more restrictions.
To study how to make eval safe, I suggest the RestrictedPython module (over 10 years of production usage; one fine piece of Python software):
http://pypi.python.org/pypi/RestrictedPython
RestrictedPython takes Python source code and modifies its AST (Abstract Syntax Tree) to make the evaluation safe within the sandbox, without leaking any Python internals which might allow escaping the sandbox.
From the RestrictedPython source code you'll learn what kind of tricks are needed to make Python sandbox-safe.
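A minimal sketch of what using it might look like; compile_restricted is RestrictedPython's entry point, but the exact guard setup varies between versions, so treat this as an outline rather than a recipe:
from RestrictedPython import compile_restricted

source = "result = (1 + 2) * 3"
# the source is compiled through RestrictedPython's policy-enforcing AST rewrite
byte_code = compile_restricted(source, '<inline>', 'exec')
safe_globals = {'__builtins__': {}}
exec byte_code in safe_globals
print safe_globals['result']  # 9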
You probably should avoid eval, actually.
But if you're stuck with it, you could just make sure your strings are alphanumeric. That should be safe.
Assuming the named functions exist and are safe:
import re
if re.match(r"^(?:safe|soft|cotton|ball|[()])+$", code): eval(code)
It's not enough to create input sanitization routines. You must also ensure that sanitization is never accidentally omitted. One way to do that is taint checking.
Is it common in Python to keep testing for type values when working in an OOP fashion?
class Foo():
    def __init__(self, barObject):
        self.setBarObject(barObject)

    def setBarObject(self, barObject):
        if isinstance(barObject, Bar):
            self.bar = barObject
        else:
            # throw exception, log, etc.
            raise TypeError("expected a Bar instance")

class Bar():
    pass
Or I can use a more loose approach, like:
class Foo():
    def __init__(self, barObject):
        self.bar = barObject

class Bar():
    pass
Nope, in fact it's overwhelmingly common not to test for type values, as in your second approach. The idea is that a client of your code (i.e. some other programmer who uses your class) should be able to pass any kind of object that has all the appropriate methods or properties. If it doesn't happen to be an instance of some particular class, that's fine; your code never needs to know the difference. This is called duck typing, because of the adage "If it quacks like a duck and flies like a duck, it might as well be a duck" (well, that's not the actual adage but I got the gist of it I think)
One place you'll see this a lot is in the standard library, with any functions that handle file input or output. Instead of requiring an actual file object, they'll take anything that implements the read() or readline() method (depending on the function), or write() for writing. In fact you'll often see this in the documentation, e.g. with tokenize.generate_tokens, which I just happened to be looking at earlier today:
The generate_tokens() generator requires one argument, readline, which must be a callable object which provides the same interface as the readline() method of built-in file objects (see section File Objects). Each call to the function should return one line of input as a string.
This allows you to use a StringIO object (like an in-memory file), or something wackier like a dialog box, in place of a real file.
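A sketch of that in action; tokenize.generate_tokens never checks for a file type, it just calls the readline it was given:
import tokenize
from StringIO import StringIO  # io.StringIO on Python 3

source = StringIO("x = 1\ny = 2\n")  # an in-memory "file"
for token in tokenize.generate_tokens(source.readline):
    print token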
In your own code, just access whatever properties of an object you need, and if it's the wrong kind of object, one of the properties you need won't be there and it'll throw an exception.
I think that it's good practice to check input for type. It's reasonable to assume that if you asked a user to give one data type they might give you another, so you should code to defend against this.
However, it seems like a waste of time (both writing and running the program) to check the types of values the program generates itself, independent of its input. As in a strongly-typed language, such checks do little to defend against programmer error.
So basically, check input but nothing else so that code can run smoothly and users don't have to wonder why they got an exception rather than a result.
If your alternative to the type check is an else containing exception handling, then you should really consider duck typing one tier up: support any object that provides the methods you require from the input, and do the work inside a try.
You can then except that (and except as specifically as possible).
The final result wouldn't be unlike what you have there, but it would be a lot more versatile and Pythonic.
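A sketch of that shape, with a hypothetical interact() method standing in for whatever Foo actually needs from its bar object:
class Foo(object):
    def __init__(self, barObject):
        self.bar = barObject  # no isinstance check: any duck will do

    def poke(self):
        try:
            return self.bar.interact()
        except AttributeError:
            # except as specifically as possible, then log or re-raise
            raise TypeError("barObject must provide interact()")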
Everything else that needed to be said about the actual question, whether it's common/good practice or not, I think has been answered excellently by David's answer.
I agree with some of the above answers, in that I generally never check for type from one function to another.
However, as someone else mentioned, anything accepted from a user should be checked, and for things like this I use regular expressions. The nice thing about using regular expressions to validate user input is that not only can you verify that the data is in the correct format, but you can parse the input into a more convenient form, like a string into a dictionary.
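For example, a sketch of validating and parsing in one step (the key=value format here is just an illustration):
import re

SETTING_RE = re.compile(r'^(\w+)=(\w+)$')

def parse_setting(text):
    # validates the format and extracts the pieces in a single pass
    match = SETTING_RE.match(text)
    if match is None:
        raise ValueError("expected 'key=value', got %r" % text)
    return {match.group(1): match.group(2)}

print parse_setting("color=blue")  # {'color': 'blue'}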
In VBScript or JavaScript, for example, declaring variables is optional, but it's usually considered best practice to declare them.
Why does Python force you to create the variable only when you use it? Since Python is case sensitive, can't it cause bugs because you misspelled a variable's name?
How would you avoid such a situation?
It's a silly artifact of Python's inspiration by "teaching languages", and it serves to make the language more accessible by removing the stumbling block of "declaration" entirely. For whatever reason (probably represented as "simplicity"), Python never gained an optional stricture like VB's "Option Explicit" to introduce mandatory declarations. Yes, it can be a source of bugs, but as the other answers here demonstrate, good coders can develop habits that allow them to compensate for pretty much any shortcoming in the language -- and as shortcomings go, this is a pretty minor one.
If you want a class with "locked-down" instance attributes, it's not hard to make one, e.g.:
class LockedDown(object):
    __locked = False
    def __setattr__(self, name, value):
        if self.__locked:
            if name[:2] != '__' and name not in self.__dict__:
                raise ValueError("Can't set attribute %r" % name)
        object.__setattr__(self, name, value)
    def _dolock(self):
        self.__locked = True

class Example(LockedDown):
    def __init__(self):
        self.mistakes = 0
        self._dolock()  # no new attributes can be added after this
    def onemore(self):
        self.mistakes += 1
        print self.mistakes
    def reset(self):
        self.mitsakes = 0  # deliberate typo: raises ValueError

x = Example()
for i in range(3): x.onemore()
x.reset()
As you'll see, the calls to x.onemore work just fine, but reset raises an exception because of the mis-spelling of the attribute as mitsakes. The rules of engagement here are that __init__ must set all attributes to initial values, then call self._dolock() to forbid any further addition of attributes. I'm exempting "super-private" attributes (ones starting with __), which stylistically should be used very rarely, for totally specific roles, and with extremely limited scope (making it trivial to spot typos in the super-careful inspection that's needed anyway to confirm the need for super-privacy), but that's a stylistic choice, easy to reverse; similarly for the choice to make the locked-down state "irreversible" (by "normal" means -- i.e. requiring very explicit workaround to bypass).
This doesn't apply to other kinds of names, such as function-local ones; again, no big deal because each function should be very small, and is a totally self-contained scope, trivially easy to inspect (if you write 100-lines functions, you have other, serious problems;-).
Is this worth the bother? No, because semi-decent unit tests should obviously catch all such typos with the greatest of ease, as a natural side effect of thoroughly exercising the class's functionality. In other words, it's not as if you need to have more unit tests just to catch the typos: the unit tests you need anyway to catch trivial semantic errors (off-by-one, +1 where -1 is meant, etc., etc.) will already catch all typos, too.
Robert Martin and Bruce Eckel both articulated this point 7 years ago in separate and independent articles -- Eckel's blog is temporarily down right now, but Martin's right here, and when Eckel's site revives the article should be here. The thesis is controversial (Jeff Attwood and his commenters debate it here, for example), but it's interesting to note that Martin and Eckel are both well-known experts of static languages such as C++ and Java (albeit with love affairs, respectively, with Ruby and Python), and they're far from the only ones to have discovered the importance of unit-tests... and how a good unit-tests suite, as a side effect, makes a static language's rigidity redundant.
By the way, one way to check your test suites is "error injection": systematically go over your codebase introducing one mis-spelling -- run the tests to make sure they do fail, if they don't add one that does fail, correct the spelling mistake, repeat. Can be fairly well automated (not the "add a test" part, but the finding of potential errors that aren't covered by the suite), as can some other forms of error injections (change every integer constant, one by one, to one more, and to one less; change each < to <= etc; swap each if and while condition to its reverse; ...), while other forms of error-injection yet require a lot more human savvy. Unfortunately I don't know of publicly available suites of error injection frameworks (for any language) -- might make a cool open source project;-).
In python it helps to think of declaring variables as binding values to names.
Try not to misspell them, or you will have new ones (assuming you are talking about assignment statements - referencing them will cause an exception).
If you are talking about instance variables, you won't be able to use them afterwards.
For example, if you had a class myclass and in its __init__ method wrote self.myvar = 0, then trying to reference self.myvare will cause an error, rather than give you a default value.
Python never forces you to create a variable only when you use it. You can always bind None to a name and then use the name elsewhere later.
To avoid misspelling variable names, I use a text editor with an autocompletion function, and I have bound
python -c "import py_compile; py_compile.compile('{filename}')"
to a function that is called whenever I save a file.
Test.
Example, with file variables.py:
#! /usr/bin/python
somevar = 5
Then, make file variables.txt (to hold the tests):
>>> import variables
>>> variables.somevar == 4
True
Then do:
python -m doctest variables.txt
And get:
**********************************************************************
File "variables.txt", line 2, in variables.test
Failed example:
variables.somevar == 4
Expected:
True
Got:
False
**********************************************************************
1 items had failures:
1 of 2 in variables.test
***Test Failed*** 1 failures.
This shows a variable declared incorrectly.
Try:
>>> import variables
>>> variables.someothervar == 5
True
Note that the variable is not named the same.
**********************************************************************
File "variables.test", line 2, in variables.test
Failed example:
variables.someothervar == 5
Exception raised:
Traceback (most recent call last):
File "/usr/local/lib/python2.6/doctest.py", line 1241, in __run
compileflags, 1) in test.globs
File "<doctest variables.test[1]>", line 1, in <module>
variables.someothervar == 5
AttributeError: 'module' object has no attribute 'someothervar'
**********************************************************************
1 items had failures:
1 of 2 in variables.test
***Test Failed*** 1 failures.
This shows a misspelled variable.
>>> import variables
>>> variables.somevar == 5
True
And this returns with no error.
I've done enough VBScript development to know that typos are a problem in variable names, and enough VBScript development to know that Option Explicit is a crutch at best. (<- 12 years of ASP VBScript experience taught me that the hard way.)
If you do any serious development you'll use an (integrated) development environment. Pylint will be part of it and tell you all your misspellings. No need to make such a feature part of the language.
Variable declaration does not prevent bugs. Any more than lack of variable declaration causes bugs.
Variable declarations prevent one specific type of bug, but they create other types of bugs.
Prevent. Writing code where there's an attempt to set (or change) a variable with the wrong type of data.
Causes. Stupid workarounds to coerce a number of unrelated types together so that assignments will "just work". Example: the C language union. Also, variable declarations force us to use casts, which in turn forces us to suppress warnings on casts at compile time because we "know" it will "just work". And it doesn't.
Lack of variable declarations does not cause bugs. The most common "threat scenario" is some kind of "mis-assignment" to a variable.
Was the variable being "reused"? This is dumb but legal and works.
Was some part of the program incorrectly assigning the wrong type?
That leads to a subtle question of "what does wrong mean?" In a duck-typed language, wrong means "Doesn't offer the right methods or attributes." Which is still nebulous. Specifically, it means "the type will be asked to provide a method or attribute it doesn't have." Which will raise an exception and the program will stop.
Raising an uncaught exception in production use is annoying and shows a lack of quality. It's stupid, but it's also a detected, known failure mode with a traceback to the exact root cause.
"can't it cause bugs because you misspelled a variable's name"
Yes. It can.
But consider this Java code.
public static void maine( String[] argv ) {
    int main;
    int mian;
}
A misspelling here is equally fatal. Statically typed Java has done nothing to prevent a misspelled variable name from causing a bug.
I have programming experience with statically typed languages. Now, writing code in Python, I feel difficulties with its readability. Let's say I have a class Host:
class Host(object):
    def __init__(self, name, network_interface):
        self.name = name
        self.network_interface = network_interface
I don't understand from this definition what "network_interface" should be. Is it a string, like "eth0", or is it an instance of a class NetworkInterface? The only way I can think of to solve this is documenting the code with a "docstring". Something like this:
class Host(object):
    ''' Attributes:
    #name: a string
    #network_interface: an instance of class NetworkInterface'''
Or maybe there are naming conventions for things like that?
Using dynamic languages will teach you something about static languages: all the help you got from the static language that you now miss in the dynamic language, it wasn't all that helpful.
To use your example, in a static language, you'd know that the parameter was a string, and in Python you don't. So in Python you write a docstring. And while you're writing it, you realize you had more to say about it than, "it's a string". You need to say what data is in the string, and what format it should have, and what the default is, and something about error conditions.
And then you realize you should have written all that down for your static language as well. Sure, Java would force you know that it was a string, but there's all these other details that need to be specified, and you have to manually do that work in any language.
The docstring conventions are at PEP 257.
The example there follows this format for specifying arguments; you can add the types if they matter:
def complex(real=0.0, imag=0.0):
    """Form a complex number.

    Keyword arguments:
    real -- the real part (default 0.0)
    imag -- the imaginary part (default 0.0)
    """
    if imag == 0.0 and real == 0.0: return complex_zero
    ...
There was also a rejected PEP for docstrings for attributes (rather than constructor arguments).
The most pythonic solution is to document with examples. If possible, state what operations an object must support to be acceptable, rather than a specific type.
class Host(object):
    def __init__(self, name, network_interface):
        """Initialise host with given name and network_interface.

        network_interface -- must support the same operations as NetworkInterface

        >>> network_interface = NetworkInterface()
        >>> host = Host("my_host", network_interface)
        """
        ...
At this point, hook your source up to doctest to make sure your doc examples continue to work in future.
Personally, I found it very useful to use pylint to validate my code.
If you follow pylint's suggestions, your code almost automatically becomes more readable, you improve your Python writing skills, and you respect naming conventions. You can also define your own naming conventions, and so on. It's very useful, especially for a Python beginner.
I suggest you use it.
Python, though not as overtly typed as C or Java, is still typed and will throw exceptions if you're doing things with types that simply do not play nice together.
To that end, if you're concerned about your code being used correctly, maintained correctly, etc. simply use docstrings, comments, or even more explicit variable names to indicate what the type should be.
Better yet, include code that will handle whichever type it may be passed, as long as it yields a usable result.
One benefit of static typing is that types are a form of documentation. When programming in Python, you can document more flexibly and fluently. Of course in your example you want to say that network_interface should implement NetworkInterface, but in many cases the type is obvious from the context, variable name, or convention, and in these cases omitting the obvious produces more readable code. It's common to describe the meaning of a parameter while only implying its type.
For example:
def Bar(foo, count):
    """Bar the foo the given number of times."""
    ...
This describes the function tersely and precisely. What foo and bar mean will be obvious from context, and that count is a (positive) integer is implicit.
For your example, I'd just mention the type in the document string:
"""Create a named host on the given NetworkInterface."""
This is shorter, more readable, and contains more information than a listing of the types.