Python code readability - python

I have programming experience with statically typed languages. Now, writing code in Python, I find its readability difficult. Let's say I have a class Host:
class Host(object):
    def __init__(self, name, network_interface):
        self.name = name
        self.network_interface = network_interface
From this definition I can't tell what "network_interface" should be. Is it a string, like "eth0", or is it an instance of a class NetworkInterface? The only way I can think of to solve this is documenting the code with a docstring. Something like this:
class Host(object):
    ''' Attributes:
    @name: a string
    @network_interface: an instance of class NetworkInterface'''
Or maybe there are naming conventions for things like that?

Using dynamic languages will teach you something about static languages: all the help you got from the static language that you now miss in the dynamic language, it wasn't all that helpful.
To use your example, in a static language, you'd know that the parameter was a string, and in Python you don't. So in Python you write a docstring. And while you're writing it, you realize you had more to say about it than, "it's a string". You need to say what data is in the string, and what format it should have, and what the default is, and something about error conditions.
And then you realize you should have written all that down for your static language as well. Sure, Java would force you to know that it was a string, but all these other details still need to be specified, and you have to do that work manually in any language.

The docstring conventions are at PEP 257.
The example there follows this format for specifying arguments; you can add the types if they matter:
def complex(real=0.0, imag=0.0):
    """Form a complex number.

    Keyword arguments:
    real -- the real part (default 0.0)
    imag -- the imaginary part (default 0.0)
    """
    if imag == 0.0 and real == 0.0: return complex_zero
    ...
There was also a rejected PEP for docstrings for attributes (rather than constructor arguments).

The most pythonic solution is to document with examples. If possible, state what operations an object must support to be acceptable, rather than a specific type.
class Host(object):
    def __init__(self, name, network_interface):
        """Initialise host with given name and network_interface.

        network_interface -- must support the same operations as NetworkInterface

        >>> network_interface = NetworkInterface()
        >>> host = Host("my_host", network_interface)
        """
        ...
At this point, hook your source up to doctest to make sure your doc examples continue to work in future.
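If doctest is new to you, a minimal way to hook it up is to run testmod() when the module is executed directly. This is just a sketch, and it assumes NetworkInterface is importable in the same module so the example above can actually run:

if __name__ == "__main__":
    import doctest
    # Collects every >>> example in this module's docstrings and runs it,
    # reporting any example whose output no longer matches.
    doctest.testmod()

You can also run the same checks without touching the source via python -m doctest -v your_module.py.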

Personally, I have found it very useful to run pylint to validate my code.
If you follow pylint's suggestions, your code almost automatically becomes more readable, you improve your Python writing skills, and you respect naming conventions. You can also define your own naming conventions, and so on. It's very useful, especially for a Python beginner.
I suggest you use it.

Python, though not as overtly typed as C or Java, is still typed and will throw exceptions if you're doing things with types that simply do not play nice together.
To that end, if you're concerned about your code being used correctly, maintained correctly, etc. simply use docstrings, comments, or even more explicit variable names to indicate what the type should be.
Better yet, include code that will allow it to handle whichever type it may be passed, as long as it yields a usable result.
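As an illustration of that last point (NetworkInterface and its constructor are assumptions here, not something from the question), the constructor could accept either a ready-made interface object or a plain interface name and normalise it:

class NetworkInterface(object):
    """Hypothetical wrapper around an interface name such as "eth0"."""
    def __init__(self, name):
        self.name = name

class Host(object):
    def __init__(self, name, network_interface):
        self.name = name
        # Accept either a NetworkInterface instance or a string like "eth0",
        # so both call styles yield a usable result.
        if isinstance(network_interface, NetworkInterface):
            self.network_interface = network_interface
        else:
            self.network_interface = NetworkInterface(network_interface)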

One benefit of static typing is that types are a form of documentation. When programming in Python, you can document more flexibly and fluently. Of course, in your example you want to say that network_interface should implement NetworkInterface, but in many cases the type is obvious from the context, variable name, or convention, and in these cases omitting the obvious produces more readable code. A common approach is to describe the meaning of a parameter and give its type implicitly.
For example:
def Bar(foo, count):
    """Bar the foo the given number of times."""
    ...
This describes the function tersely and precisely. What foo and bar mean will be obvious from context, and that count is a (positive) integer is implicit.
For your example, I'd just mention the type in the document string:
"""Create a named host on the given NetworkInterface."""
This is shorter, more readable, and contains more information than a listing of the types.
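Applied to the class from the question, that suggestion might look like this (just a sketch):

class Host(object):
    def __init__(self, name, network_interface):
        """Create a named host on the given NetworkInterface."""
        self.name = name
        self.network_interface = network_interface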

Related

How do I access attributes of a superclass from within a subclass? [duplicate]

In other languages, a general guideline that helps produce better code is always make everything as hidden as possible. If in doubt about whether a variable should be private or protected, it's better to go with private.
Does the same hold true for Python? Should I use two leading underscores on everything at first, and only make them less hidden (only one underscore) as I need them?
If the convention is to use only one underscore, I'd also like to know the rationale.
Here's a comment I left on JBernardo's answer. It explains why I asked this question and also why I'd like to know why Python is different from the other languages:
I come from languages that train you to think everything should be only as public as needed and no more. The reasoning is that this will reduce dependencies and make the code safer to alter. The Python way of doing things in reverse -- starting from public and going towards hidden -- is odd to me.
When in doubt, leave it "public" - I mean, do not add anything to obscure the name of your attribute. If you have a class with some internal value, do not bother about it. Instead of writing:
class Stack(object):

    def __init__(self):
        self.__storage = []  # Too uptight

    def push(self, value):
        self.__storage.append(value)
write this by default:
class Stack(object):

    def __init__(self):
        self.storage = []  # No mangling

    def push(self, value):
        self.storage.append(value)
This is for sure a controversial way of doing things. Python newbies hate it, and even some old Python guys despise this default - but it is the default anyway, so I recommend you to follow it, even if you feel uncomfortable.
If you really want to send the message "Can't touch this!" to your users, the usual way is to precede the variable with one underscore. This is just a convention, but people understand it and take double care when dealing with such stuff:
class Stack(object):

    def __init__(self):
        self._storage = []  # This is ok, but Pythonistas use it to be relaxed about it

    def push(self, value):
        self._storage.append(value)
This can be useful, too, for avoiding conflict between property names and attribute names:
class Person(object):

    def __init__(self, name, age):
        self.name = name
        self._age = age if age >= 0 else 0

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, age):
        if age >= 0:
            self._age = age
        else:
            self._age = 0
What about the double underscore? Well, we use the double underscore magic mainly to avoid accidental overloading of methods and name conflicts with superclasses' attributes. It can be pretty valuable if you write a class to be extended many times.
If you want to use it for other purposes, you can, but it is neither usual nor recommended.
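As a quick illustration of that use (the classes here are made up for the example), mangling keeps a base class's attribute from being clobbered when a subclass happens to pick the same name:

class Base(object):
    def __init__(self):
        self.__token = "base secret"    # stored as _Base__token

    def base_token(self):
        return self.__token             # still reads _Base__token

class Child(Base):
    def __init__(self):
        super(Child, self).__init__()
        self.__token = "child value"    # stored as _Child__token, no clash

print(Child().base_token())  # -> 'base secret'; the parent's attribute survives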
EDIT: Why is this so? Well, the usual Python style does not emphasize making things private - on the contrary! There are many reasons for that - most of them controversial... Let us see some of them.
Python has properties
Today, most OO languages use the opposite approach: what should not be used should not be visible, so attributes should be private. Theoretically, this would yield more manageable, less coupled classes because no one would change the objects' values recklessly.
However, it is not so simple. For example, Java classes have many getters that only get the values and setters that only set the values. You need, let us say, seven lines of code to declare a single attribute - which a Python programmer would say is needlessly complex. Also, you write a lot of code for what is effectively one public field, since in practice its value can still be changed through the getters and setters.
So why follow this private-by-default policy? Just make your attributes public by default. Of course, this is problematic in Java because if you decide to add some validation to your attribute, it would require you to change all:
person.age = age;
in your code to, let us say,
person.setAge(age);
setAge() being:
public void setAge(int age) {
    if (age >= 0) {
        this.age = age;
    } else {
        this.age = 0;
    }
}
So in Java (and other languages), the default is to use getters and setters anyway because they can be annoying to write but can spare you much time if you find yourself in the situation I've described.
However, you do not need to do it in Python since Python has properties. If you have this class:
class Person(object):

    def __init__(self, name, age):
        self.name = name
        self.age = age
...and then you decide to validate ages, you do not need to change the person.age = age pieces of your code. Just add a property (as shown below)
class Person(object):

    def __init__(self, name, age):
        self.name = name
        self._age = age if age >= 0 else 0

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, age):
        if age >= 0:
            self._age = age
        else:
            self._age = 0
If you can do it and still use person.age = age, why would you add private fields and getters and setters?
(Also, see Python is not Java and this article about the harms of using getters and setters.).
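To make the point concrete, a caller's code keeps working unchanged after the property is added (continuing the Person example above):

person = Person("Ana", 30)
person.age = -5     # goes through the age setter defined above
print(person.age)   # -> 0, the value was validated transparently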
Everything is visible anyway - and trying to hide complicates your work
Even in languages with private attributes, you can access them through some reflection/introspection library. And people do it a lot, in frameworks and for solving urgent needs. The problem is that introspection libraries are just a complicated way of doing what you could do with public attributes.
Since Python is a very dynamic language, adding this burden to your classes is counterproductive.
The problem is not being possible to see - it is being required to see
For a Pythonista, encapsulation is not the inability to see the internals of classes, but the possibility of avoiding having to look at them. Encapsulation is the property of a component that the user can use without being concerned about the internal details. If you can use a component without bothering yourself about its implementation, then it is encapsulated (in the opinion of a Python programmer).
Now, if you wrote a class that can be used without thinking about implementation details, there is no problem if you want to look inside the class for some reason. The point is: your API should be good, and the rest is details.
Guido said so
Well, this is not controversial: he said so, actually. (Look for "open kimono.")
This is culture
Yes, there are some reasons, but no critical reason. This is primarily a cultural aspect of programming in Python. Frankly, it could be the other way, too - but it is not. Also, you could just as easily ask the other way around: why do some languages use private attributes by default? For the same main reason as for the Python practice: because it is the culture of these languages, and each choice has advantages and disadvantages.
Since there already is this culture, you are well-advised to follow it. Otherwise, you will get annoyed by Python programmers telling you to remove the __ from your code when you ask a question in Stack Overflow :)
First - What is name mangling?
Name mangling is invoked when you are in a class definition and use __any_name or __any_name_, that is, two (or more) leading underscores and at most one trailing underscore.
class Demo:
    __any_name = "__any_name"
    __any_other_name_ = "__any_other_name_"
And now:
>>> [n for n in dir(Demo) if 'any' in n]
['_Demo__any_name', '_Demo__any_other_name_']
>>> Demo._Demo__any_name
'__any_name'
>>> Demo._Demo__any_other_name_
'__any_other_name_'
When in doubt, do what?
The ostensible use is to prevent subclassers from using an attribute that the class uses.
A potential value is in avoiding name collisions with subclassers who want to override behavior, so that the parent class functionality keeps working as expected. However, the example in the Python documentation is not Liskov substitutable, and no examples come to mind where I have found this useful.
The downsides are that it increases cognitive load for reading and understanding a code base, and especially so when debugging where you see the double underscore name in the source and a mangled name in the debugger.
My personal approach is to intentionally avoid it. I work on a very large code base. The rare uses of it stick out like a sore thumb and do not seem justified.
You do need to be aware of it so you know it when you see it.
PEP 8
PEP 8, the Python standard library style guide, currently says (abridged):
There is some controversy about the use of __names.
If your class is intended to be subclassed, and you have attributes that you do not want subclasses to use, consider naming them with double leading underscores and no trailing underscores.
Note that only the simple class name is used in the mangled name, so if a subclass chooses both the same class name and attribute name,
you can still get name collisions.
Name mangling can make certain uses, such as debugging and __getattr__(), less convenient. However, the name mangling algorithm is well documented and easy to perform manually.
Not everyone likes name mangling. Try to balance the need to avoid accidental name clashes with potential use by advanced callers.
How does it work?
If you prepend two underscores (without ending double-underscores) in a class definition, the name will be mangled, and an underscore followed by the class name will be prepended on the object:
>>> class Foo(object):
... __foobar = None
... _foobaz = None
... __fooquux__ = None
...
>>> [name for name in dir(Foo) if 'foo' in name]
['_Foo__foobar', '__fooquux__', '_foobaz']
Note that names will only get mangled when the class definition is parsed:
>>> Foo.__test = None
>>> Foo.__test
>>> Foo._Foo__test
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'Foo' has no attribute '_Foo__test'
Also, those new to Python sometimes have trouble understanding what's going on when they can't manually access a name they see defined in a class definition. This is not a strong reason against it, but it's something to consider if you have a learning audience.
One Underscore?
If the convention is to use only one underscore, I'd also like to know the rationale.
When my intention is for users to keep their hands off an attribute, I tend to only use the one underscore, but that's because in my mental model, subclassers would have access to the name (which they always have, as they can easily spot the mangled name anyways).
If I were reviewing code that uses the __ prefix, I would ask why they're invoking name mangling, and if they couldn't do just as well with a single underscore, keeping in mind that if subclassers choose the same names for the class and class attribute there will be a name collision in spite of this.
I wouldn't say that practice produces better code. Visibility modifiers only distract you from the task at hand, and as a side effect force your interface to be used as you intended. Generally speaking, enforcing visibility prevents programmers from messing things up if they haven't read the documentation properly.
A far better solution is the route that Python encourages: your classes and variables should be well documented, and their behaviour clear. The source should be available. This is a far more extensible and reliable way to write code.
My strategy in Python is this:
Just write the damn thing, make no assumptions about how your data should be protected. This assumes that you write to create the ideal interfaces for your problems.
Use a leading underscore for stuff that probably won't be used externally, and isn't part of the normal "client code" interface.
Use double underscore only for things that are purely convenience inside the class, or will cause considerable damage if accidentally exposed.
Above all, it should be clear what everything does. Document it if someone else will be using it. Document it if you want it to be useful in a year's time.
As a side note, you should actually be going with protected in those other languages: You never know your class might be inherited later and for what it might be used. Best to only protect those variables that you are certain cannot or should not be used by foreign code.
You shouldn't start with private data and make it public as necessary. Rather, you should start by figuring out the interface of your object. I.e. you should start by figuring out what the world sees (the public stuff) and then figure out what private stuff is necessary for that to happen.
Other languages make it difficult to make private that which was once public. I.e. I'll break lots of code if I make my variable private or protected. But with properties in Python this isn't the case. Rather, I can maintain the same interface even while rearranging the internal data.
The difference between _ and __ is that python actually makes an attempt to enforce the latter. Of course, it doesn't try really hard but it does make it difficult. Having _ merely tells other programmers what the intention is, they are free to ignore at their peril. But ignoring that rule is sometimes helpful. Examples include debugging, temporary hacks, and working with third party code that wasn't intended to be used the way you use it.
There are already a lot of good answers to this, but I'm going to offer another one. This is also partially a response to people who keep saying that double underscore isn't private (it really is).
If you look at Java/C#, both of them have private/protected/public. All of these are compile-time constructs. They are only enforced at the time of compilation. If you were to use reflection in Java/C#, you could easily access a private method.
Now every time you call a function in Python, you are inherently using reflection. These pieces of code are the same in Python.
lst = []
lst.append(1)
getattr(lst, 'append')(1)
The "dot" syntax is only syntactic sugar for the latter piece of code. Mostly because using getattr is already ugly with only one function call. It just gets worse from there.
So with that, there can't be a Java/C# version of private, as Python doesn't compile the code. Java and C# can't check if a function is private or public at runtime, as that information is gone (and it has no knowledge of where the function is being called from).
Now with that information, the name mangling of the double underscore makes the most sense for achieving "private-ness". Now when a function is called from the 'self' instance and it notices that it starts with '__', it just performs the name mangling right there. It's just more syntactic sugar. That syntactic sugar allows the equivalent of 'private' in a language that only uses reflection for data member access.
Disclaimer: I have never heard anybody from the Python development say anything like this. The real reason for the lack of "private" is cultural, but you'll also notice that most scripting/interpreted languages have no private. A strictly enforceable private is not practical at anything except for compile time.
First: Why do you want to hide your data? Why is that so important?
Most of the time you don't really want to do it, but you do it because others are doing it.
If you really, really, really don't want people using something, add one underscore in front of it. That's it... Pythonistas know that things with one underscore are not guaranteed to work every time and may change without you knowing.
That's the way we live and we're okay with that.
Using two underscores will make your class so painful to subclass that even you will not want to work that way.
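The kind of pain being referred to looks roughly like this (hypothetical classes): inside the subclass, __count mangles to _B__count, so the parent's attribute is simply not found:

class A(object):
    def __init__(self):
        self.__count = 0      # stored as _A__count

class B(A):
    def increment(self):
        self.__count += 1     # looks up _B__count, which does not exist

b = B()
# b.increment() raises AttributeError: 'B' object has no attribute '_B__count'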
The chosen answer does a good job of explaining how properties remove the need for private attributes, but I would also add that functions at the module level remove the need for private methods.
If you turn a method into a function at the module level, you remove the opportunity for subclasses to override it. Moving some functionality to the module level is more Pythonic than trying to hide methods with name mangling.
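A small sketch of that idea (the names are invented for the example): the helper lives in the module, so it is not part of the class's interface and a subclass cannot override it:

def _normalise_name(name):
    """Module-level helper; the leading underscore marks it as internal."""
    return name.strip().lower()

class Server(object):
    def __init__(self, name):
        # Calls the module-level function directly; a subclass defining its
        # own _normalise_name method does not change what happens here.
        self.name = _normalise_name(name)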
The following code snippet explains all the different cases:
two leading underscores (__a)
single leading underscore (_a)
no underscore (a)
class Test:

    def __init__(self):
        self.__a = 'test1'
        self._a = 'test2'
        self.a = 'test3'

    def change_value(self, value):
        self.__a = value
        return self.__a
Printing all valid attributes of the Test object:
testObj1 = Test()
valid_attributes = dir(testObj1)
print valid_attributes
['_Test__a', '__doc__', '__init__', '__module__', '_a', 'a',
'change_value']
Here, you can see that the name of __a has been changed to _Test__a, to prevent this variable from being overridden by any subclass. This concept is known as "name mangling" in Python.
You can still access it like this:
testObj2 = Test()
print testObj2._Test__a
test1
Similarly, in the case of _a, the variable just notifies the developer that it should be used as an internal variable of that class; the Python interpreter won't do anything even if you access it, but it is not good practice.
testObj3 = Test()
print testObj3._a
test2
The variable a can be accessed from anywhere; it's like a public class variable.
testObj4 = Test()
print testObj4.a
test3
Hope the answer helped you :)
At first glance it should be the same as for other languages (by "other" I mean Java or C++), but it isn't.
In Java you make private all variables that shouldn't be accessible from outside. At the same time, in Python you can't achieve this, since there is no "privateness" (as one of the Python principles says, "We're all adults"). So a double underscore means only "Guys, do not use this field directly". A single underscore carries the same meaning, while not causing any headache when you have to inherit from the class in question (inheritance is just one example of a possible problem caused by the double underscore).
So, I'd recommend you use a single underscore by default for "private" members.
"If in doubt about whether a variable should be private or protected, it's better to go with private." - yes, same holds in Python.
Some answers here mention 'conventions', but don't give links to those conventions. The authoritative guide for Python, PEP 8, states explicitly:
If in doubt, choose non-public; it's easier to make it public later than to make a public attribute non-public.
The distinction between public and private, and name mangling in Python have been considered in other answers. From the same link,
We don't use the term "private" here, since no attribute is really private in Python (without a generally unnecessary amount of work).
# Example program for Python name mangling
class Demo:
    __any_name = "__any_name"
    __any_other_name_ = "__any_other_name_"

[n for n in dir(Demo) if 'any' in n]  # gives ['_Demo__any_name', '_Demo__any_other_name_']


PEP8, locals() and interpolation

Here is some code:
foo = "Bears"
"Lions, Tigers and %(foo)s" % locals()
My PEP8 linter (SublimeLinter) complains about this, because foo is "unreferenced". My question is whether PEP8 should count this type of string interpolation as "referenced", or if there is a good reason to consider this "bad style".
Well, it isn't referenced. The part that's questionable style is using locals() to access variables instead of just accessing them by name. See this previous question for why that's a dubious idea. It's not a terrible thing, but it's not good style for a program that you want to maintain in the long term.
Edit: It's true that when you use a literal format string, it seems more explicit. But part of the point of the previous post is that in a larger program, you will probably wind up not using a literal format string. If it's a small program and you don't care, go ahead and use it. But warning about things that are likely to cause maintainability problems later is also part of what style guides and linters are for.
Also, locals isn't a canonical representation of names that are explicitly referenced in the literal. It's a canonical representation of all names in the local namespace. You can still do it if you like, but it's basically a loose/sloppy alternative to explicitly using the names you're using, which is again, exactly the sort of thing linters are supposed to warn you about.
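For comparison, the explicit forms that a linter has no trouble with reference foo directly (same output, no locals()):

foo = "Bears"
print("Lions, Tigers and %s" % foo)               # positional interpolation
print("Lions, Tigers and {foo}".format(foo=foo))  # named, with an explicit mapping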
Even if you reject BrenBarn's argument that foo isn't referenced, if you accept the argument that passing locals() in string formatting should be flagged, it may not be worth writing to code to consider foo referenced.
First, in every case where that extra code would help, the construct is not acceptable anyway, and the user is going to have to ignore a lint warning regardless. Yes, there is some harm in giving the user two lint warnings to ignore when there is actually only one problem, especially if one of the warnings is somewhat misleading. But is it enough harm to justify writing very complicated code and introducing new bugs into the linter?
You also have to consider that for this to actually work, the linter has to recognize not just % formatting, but also {} formatting, and every other kind of string formatting, HTML templating, etc. that the user could be using. In practice, this means handling various very common forms, and providing some kind of hook for the user to describe anything else.
And, on top of that, even if you don't think it should work with arbitrarily-generated format strings, it surely has to at least work with l10n. How is that going to work? If the format string is generated by something like gettext, the linter has no way of knowing whether foo is referenced, unless it can check all of the translations and see that at least one of them references foo—which means it has to understand (or have hooks to be taught) every string translation mechanism, and have access to the translation database.
So, I would suggest that, even if you consider the warning spurious in this case, you leave it there anyway. At most, add something which qualifies the warning:
foo is possibly unreferenced, but in a function that uses locals()
The following wouldn't make SublimeLinter happy either. It looks up each variable name referenced in the string and substitutes the corresponding value from the namespace mapping, which defaults to the caller's locals. As such, it shows the inherent limitation a utility like SublimeLinter has when trying to determine whether something has been referenced in Python. My advice is to just ignore SublimeLinter, or add code to fake it out, like foo = foo. I've had to do something like the latter to get rid of C compiler warnings about things that were both legal and intended.
import re
import sys

SUB_RE = re.compile(r"%\((.*?)\)s")

def local_vars_subst(s, namespace=None):
    if namespace is None:
        namespace = sys._getframe(1).f_locals

    def repl(matchobj):
        var = matchobj.group(1).strip()
        try:
            retval = namespace[var]
        except KeyError:
            retval = "<undefined>"
        return retval

    return SUB_RE.sub(repl, s)

foo = "Bears"
print local_vars_subst("Lions, Tigers and %(foo)s")

Should I use name mangling in Python?

In other languages, a general guideline that helps produce better code is always make everything as hidden as possible. If in doubt about whether a variable should be private or protected, it's better to go with private.
Does the same hold true for Python? Should I use two leading underscores on everything at first, and only make them less hidden (only one underscore) as I need them?
If the convention is to use only one underscore, I'd also like to know the rationale.
Here's a comment I left on JBernardo's answer. It explains why I asked this question and also why I'd like to know why Python is different from the other languages:
I come from languages that train you to think everything should be only as public as needed and no more. The reasoning is that this will reduce dependencies and make the code safer to alter. The Python way of doing things in reverse -- starting from public and going towards hidden -- is odd to me.
When in doubt, leave it "public" - I mean, do not add anything to obscure the name of your attribute. If you have a class with some internal value, do not bother about it. Instead of writing:
class Stack(object):
def __init__(self):
self.__storage = [] # Too uptight
def push(self, value):
self.__storage.append(value)
write this by default:
class Stack(object):
def __init__(self):
self.storage = [] # No mangling
def push(self, value):
self.storage.append(value)
This is for sure a controversial way of doing things. Python newbies hate it, and even some old Python guys despise this default - but it is the default anyway, so I recommend you to follow it, even if you feel uncomfortable.
If you really want to send the message "Can't touch this!" to your users, the usual way is to precede the variable with one underscore. This is just a convention, but people understand it and take double care when dealing with such stuff:
class Stack(object):
def __init__(self):
self._storage = [] # This is ok, but Pythonistas use it to be relaxed about it
def push(self, value):
self._storage.append(value)
This can be useful, too, for avoiding conflict between property names and attribute names:
class Person(object):
def __init__(self, name, age):
self.name = name
self._age = age if age >= 0 else 0
#property
def age(self):
return self._age
#age.setter
def age(self, age):
if age >= 0:
self._age = age
else:
self._age = 0
What about the double underscore? Well, we use the double underscore magic mainly to avoid accidental overloading of methods and name conflicts with superclasses' attributes. It can be pretty valuable if you write a class to be extended many times.
If you want to use it for other purposes, you can, but it is neither usual nor recommended.
EDIT: Why is this so? Well, the usual Python style does not emphasize making things private - on the contrary! There are many reasons for that - most of them controversial... Let us see some of them.
Python has properties
Today, most OO languages use the opposite approach: what should not be used should not be visible, so attributes should be private. Theoretically, this would yield more manageable, less coupled classes because no one would change the objects' values recklessly.
However, it is not so simple. For example, Java classes have many getters that only get the values and setters that only set the values. You need, let us say, seven lines of code to declare a single attribute - which a Python programmer would say is needlessly complex. Also, you write a lot of code to get one public field since you can change its value using the getters and setters in practice.
So why follow this private-by-default policy? Just make your attributes public by default. Of course, this is problematic in Java because if you decide to add some validation to your attribute, it would require you to change all:
person.age = age;
in your code to, let us say,
person.setAge(age);
setAge() being:
public void setAge(int age) {
if (age >= 0) {
this.age = age;
} else {
this.age = 0;
}
}
So in Java (and other languages), the default is to use getters and setters anyway because they can be annoying to write but can spare you much time if you find yourself in the situation I've described.
However, you do not need to do it in Python since Python has properties. If you have this class:
class Person(object):
def __init__(self, name, age):
self.name = name
self.age = age
...and then you decide to validate ages, you do not need to change the person.age = age pieces of your code. Just add a property (as shown below)
class Person(object):
def __init__(self, name, age):
self.name = name
self._age = age if age >= 0 else 0
#property
def age(self):
return self._age
#age.setter
def age(self, age):
if age >= 0:
self._age = age
else:
self._age = 0
Suppose you can do it and still use person.age = age, why would you add private fields and getters and setters?
(Also, see Python is not Java and this article about the harms of using getters and setters.).
Everything is visible anyway - and trying to hide complicates your work
Even in languages with private attributes, you can access them through some reflection/introspection library. And people do it a lot, in frameworks and for solving urgent needs. The problem is that introspection libraries are just a complicated way of doing what you could do with public attributes.
Since Python is a very dynamic language, adding this burden to your classes is counterproductive.
The problem is not being possible to see - it is being required to see
For a Pythonista, encapsulation is not the inability to see the internals of classes but the possibility of avoiding looking at it. Encapsulation is the property of a component that the user can use without concerning about the internal details. If you can use a component without bothering yourself about its implementation, then it is encapsulated (in the opinion of a Python programmer).
Now, if you wrote a class you can use it without thinking about implementation details, there is no problem if you want to look inside the class for some reason. The point is: your API should be good, and the rest is details.
Guido said so
Well, this is not controversial: he said so, actually. (Look for "open kimono.")
This is culture
Yes, there are some reasons, but no critical reason. This is primarily a cultural aspect of programming in Python. Frankly, it could be the other way, too - but it is not. Also, you could just as easily ask the other way around: why do some languages use private attributes by default? For the same main reason as for the Python practice: because it is the culture of these languages, and each choice has advantages and disadvantages.
Since there already is this culture, you are well-advised to follow it. Otherwise, you will get annoyed by Python programmers telling you to remove the __ from your code when you ask a question in Stack Overflow :)
First - What is name mangling?
Name mangling is invoked when you are in a class definition and use __any_name or __any_name_, that is, two (or more) leading underscores and at most one trailing underscore.
class Demo:
__any_name = "__any_name"
__any_other_name_ = "__any_other_name_"
And now:
>>> [n for n in dir(Demo) if 'any' in n]
['_Demo__any_name', '_Demo__any_other_name_']
>>> Demo._Demo__any_name
'__any_name'
>>> Demo._Demo__any_other_name_
'__any_other_name_'
When in doubt, do what?
The ostensible use is to prevent subclassers from using an attribute that the class uses.
A potential value is in avoiding name collisions with subclassers who want to override behavior, so that the parent class functionality keeps working as expected. However, the example in the Python documentation is not Liskov substitutable, and no examples come to mind where I have found this useful.
The downsides are that it increases cognitive load for reading and understanding a code base, and especially so when debugging where you see the double underscore name in the source and a mangled name in the debugger.
My personal approach is to intentionally avoid it. I work on a very large code base. The rare uses of it stick out like a sore thumb and do not seem justified.
You do need to be aware of it so you know it when you see it.
PEP 8
PEP 8, the Python standard library style guide, currently says (abridged):
There is some controversy about the use of __names.
If your class is intended to be subclassed, and you have attributes that you do not want subclasses to use, consider naming them with double leading underscores and no trailing underscores.
Note that only the simple class name is used in the mangled name, so if a subclass chooses both the same class name and attribute name,
you can still get name collisions.
Name mangling can make certain uses, such as debugging and __getattr__() , less convenient. However the name mangling algorithm is well documented and easy to perform manually.
Not everyone likes name mangling. Try to balance the need to avoid accidental name clashes with potential use by advanced callers.
How does it work?
If you prepend two underscores (without ending double-underscores) in a class definition, the name will be mangled, and an underscore followed by the class name will be prepended on the object:
>>> class Foo(object):
... __foobar = None
... _foobaz = None
... __fooquux__ = None
...
>>> [name for name in dir(Foo) if 'foo' in name]
['_Foo__foobar', '__fooquux__', '_foobaz']
Note that names will only get mangled when the class definition is parsed:
>>> Foo.__test = None
>>> Foo.__test
>>> Foo._Foo__test
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'Foo' has no attribute '_Foo__test'
Also, those new to Python sometimes have trouble understanding what's going on when they can't manually access a name they see defined in a class definition. This is not a strong reason against it, but it's something to consider if you have a learning audience.
One Underscore?
If the convention is to use only one underscore, I'd also like to know the rationale.
When my intention is for users to keep their hands off an attribute, I tend to only use the one underscore, but that's because in my mental model, subclassers would have access to the name (which they always have, as they can easily spot the mangled name anyways).
If I were reviewing code that uses the __ prefix, I would ask why they're invoking name mangling, and if they couldn't do just as well with a single underscore, keeping in mind that if subclassers choose the same names for the class and class attribute there will be a name collision in spite of this.
I wouldn't say that practice produces better code. Visibility modifiers only distract you from the task at hand, and as a side effect force your interface to be used as you intended. Generally speaking, enforcing visibility prevents programmers from messing things up if they haven't read the documentation properly.
A far better solution is the route that Python encourages: Your classes and variables should be well documented, and their behaviour clear. The source should be available. This is far more extensible and reliable way to write code.
My strategy in Python is this:
Just write the damn thing, make no assumptions about how your data should be protected. This assumes that you write to create the ideal interfaces for your problems.
Use a leading underscore for stuff that probably won't be used externally, and isn't part of the normal "client code" interface.
Use a double underscore only for things that are purely a convenience inside the class, or that would cause considerable damage if accidentally exposed.
Above all, it should be clear what everything does. Document it if someone else will be using it. Document it if you want it to be useful in a year's time.
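A brief sketch of that layering, under the assumption of a made-up CsvReport class:

class CsvReport(object):
    def __init__(self, path):
        self.path = path              # public: part of the normal client interface
        self._row_cache = None        # single underscore: internal, probably not for external use
        self.__delimiter = ','        # double underscore: purely an internal convenience

    def rows(self):
        """Public, documented behaviour."""
        if self._row_cache is None:
            self._row_cache = self._read_rows()
        return self._row_cache

    def _read_rows(self):
        # Internal helper; subclasses may still override it if they know what they're doing.
        with open(self.path) as f:
            return [line.rstrip('\n').split(self.__delimiter) for line in f]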
As a side note, in those other languages you should usually reach for protected rather than private: you never know whether your class will be inherited from later, or what it will be used for. It's best to lock away only those variables that you are certain cannot or should not be used by foreign code.
You shouldn't start with private data and make it public as necessary. Rather, you should start by figuring out the interface of your object. I.e. you should start by figuring out what the world sees (the public stuff) and then figure out what private stuff is necessary for that to happen.
Other languages make it difficult to make private that which was once public: I'll break lots of code if I make my variable private or protected. But with properties in Python this isn't the case; I can maintain the same interface even while rearranging the internal data.
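For instance, a minimal sketch (Temperature and celsius are made-up names): an attribute that started life as plain public data can later be backed by a property without breaking callers.

# Version 1: plain public attribute.
class Temperature(object):
    def __init__(self, celsius):
        self.celsius = celsius

# Version 2: internals now store kelvin, but callers still read and write .celsius as before.
class Temperature(object):
    def __init__(self, celsius):
        self.celsius = celsius

    @property
    def celsius(self):
        return self._kelvin - 273.15

    @celsius.setter
    def celsius(self, value):
        self._kelvin = value + 273.15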
The difference between _ and __ is that Python actually makes an attempt to enforce the latter. Of course, it doesn't try really hard, but it does make it difficult. Having _ merely tells other programmers what the intention is; they are free to ignore it at their peril. But ignoring that rule is sometimes helpful. Examples include debugging, temporary hacks, and working with third-party code that wasn't intended to be used the way you use it.
There are already a lot of good answers to this, but I'm going to offer another one. This is also partially a response to people who keep saying that double underscore isn't private (it really is).
If you look at Java and C#, both of them have private/protected/public. All of these are compile-time constructs; they are only enforced at the time of compilation. If you were to use reflection in Java or C#, you could easily access a private method.
Now every time you call a function in Python, you are inherently using reflection. These pieces of code are the same in Python.
lst = []
lst.append(1)
getattr(lst, 'append')(1)
The "dot" syntax is only syntactic sugar for the latter piece of code. Mostly because using getattr is already ugly with only one function call. It just gets worse from there.
So with that, there can't be a Java/C# version of private, because Python doesn't compile the code into a form where access could be checked. Python can't tell whether an attribute access is "private" or "public" at runtime: that information is gone, and it has no knowledge of where the access is coming from.
With that in mind, the name mangling of the double underscore makes the most sense for achieving "private-ness". When a name starting with '__' (and without a trailing '__') appears inside a class body, the compiler performs the name mangling right there. It's just more syntactic sugar, and that syntactic sugar allows the equivalent of 'private' in a language that only uses reflection for data member access.
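A small sketch of that equivalence (Account and __balance are made-up names):

class Account(object):
    def __init__(self):
        self.__balance = 0                 # compiled as: self._Account__balance = 0

acct = Account()
print(getattr(acct, '_Account__balance'))  # 0: the same reflection the dot syntax uses
print(acct._Account__balance)              # 0: no access check, only the rewritten name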
Disclaimer: I have never heard anybody on the Python development team say anything like this. The real reason for the lack of "private" is cultural, but you'll also notice that most scripting/interpreted languages have no private. A strictly enforced private is not practical anywhere except at compile time.
First: Why do you want to hide your data? Why is that so important?
Most of the time you don't really want to do it; you do it because everybody else is doing it.
If you really, really, really don't want people using something, add one underscore in front of it. That's it. Pythonistas know that things with one underscore are not guaranteed to work every time and may change without your knowing.
That's the way we live, and we're okay with that.
Using two underscores will make your class so awkward to subclass that even you will not want to work with it that way.
The chosen answer does a good job of explaining how properties remove the need for private attributes, but I would also add that functions at the module level remove the need for private methods.
If you turn a method into a function at the module level, you remove the opportunity for subclasses to override it. Moving some functionality to the module level is more Pythonic than trying to hide methods with name mangling.
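A hedged sketch of that refactoring; Report and _format_title are illustrative names only:

def _format_title(raw):
    # Module-level helper: the leading underscore marks it as internal,
    # and since it is not a method, subclasses cannot override it.
    return raw.strip().title()

class Report(object):
    def __init__(self, title):
        # Instead of a "private" self.__format_title() method,
        # delegate to the module-level function.
        self.title = _format_title(title)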
The following code snippet will illustrate all the different cases:
two leading underscores (__a)
single leading underscore (_a)
no underscore (a)
class Test:
    def __init__(self):
        self.__a = 'test1'
        self._a = 'test2'
        self.a = 'test3'

    def change_value(self, value):
        self.__a = value
        return self.__a
Printing all valid attributes of the Test object:
testObj1 = Test()
valid_attributes = dir(testObj1)
print valid_attributes
['_Test__a', '__doc__', '__init__', '__module__', '_a', 'a',
'change_value']
Here you can see that the name of __a has been changed to _Test__a, to prevent this variable from being overridden by any subclass. This concept is known as "name mangling" in Python.
You can still access it like this:
testObj2 = Test()
print testObj2._Test__a
test1
Similarly, in the case of _a, the variable is just there to notify the developer that it should be used as an internal variable of the class; the Python interpreter won't do anything even if you access it, but it is not good practice.
testObj3 = Test()
print testObj3._a
test2
The variable a can be accessed from anywhere; it behaves like a public class variable.
testObj4 = Test()
print testObj4.a
test3
Hope the answer helped you :)
At first glance it should be the same as for other languages (by "other" I mean Java or C++), but it isn't.
In Java you make private all variables that shouldn't be accessible from outside. At the same time, in Python you can't achieve this, since there is no "privateness" (as one of the Python principles says, "we're all adults"). So a double underscore means only "folks, do not use this field directly". A single underscore has the same meaning, but it doesn't cause any headaches when you have to inherit from the class in question (just one example of a possible problem caused by a double underscore).
So I'd recommend using a single underscore by default for "private" members.
"If in doubt about whether a variable should be private or protected, it's better to go with private." - yes, same holds in Python.
Some answers here talk about 'conventions', but don't give links to those conventions. The authoritative guide for Python, PEP 8, states explicitly:
If in doubt, choose non-public; it's easier to make it public later than to make a public attribute non-public.
The distinction between public and private, and name mangling in Python have been considered in other answers. From the same link,
We don't use the term "private" here, since no attribute is really private in Python (without a generally unnecessary amount of work).
# Example program for Python name mangling
class Demo:
    __any_name = "__any_name"
    __any_other_name_ = "__any_other_name_"

[n for n in dir(Demo) if 'any' in n]  # gives ['_Demo__any_name', '_Demo__any_other_name_']

Python 3 and static typing

I didn't really pay as much attention to Python 3's development as I would have liked, and only just noticed some interesting new syntax changes. Specifically, from this SO answer, function parameter annotations:
def digits(x:'nonnegative number') -> "yields number's digits":
# ...
Not knowing anything about this, I thought it could maybe be used for implementing static typing in Python!
After some searching, there seemed to be a lot of discussion regarding (entirely optional) static typing in Python, such as that mentioned in PEP 3107 and in "Adding Optional Static Typing to Python" (and part 2).
...but I'm not clear how far this has progressed. Are there any implementations of static type checking that use the parameter annotations? Did any of the parameterised-type ideas make it into Python 3?
Thanks for reading my code!
Indeed, it's not hard to create a generic annotation enforcer in Python. Here's my take:
'''Very simple enforcer of type annotations.
This toy super-decorator can decorate all functions in a given module that have
annotations so that the type of input and output is enforced; an AssertionError is
raised on mismatch.
This module also has a test function func() which should fail and logging facility
log which defaults to print.
Since this is a test module, I cut corners by only checking *keyword* arguments.
'''
import sys

log = print

def func(x: 'int' = 0) -> 'str':
    '''An example function that fails type checking.'''
    return x

# For simplicity, I only do keyword args.
def check_type(*args):
    param, value, assert_type = args
    log('Checking {0} = {1} of {2}.'.format(*args))
    if not isinstance(value, assert_type):
        raise AssertionError(
            'Check failed - parameter {0} = {1} not {2}.'
            .format(*args))
    return value

def decorate_func(func):
    def newf(*args, **kwargs):
        for k, v in kwargs.items():
            check_type(k, v, ann[k])
        return check_type('<return_value>', func(*args, **kwargs), ann['return'])
    ann = {k: eval(v) for k, v in func.__annotations__.items()}
    newf.__doc__ = func.__doc__
    newf.__type_checked = True
    return newf

def decorate_module(module='__main__'):
    '''Enforces type from annotation for all functions in module.'''
    d = sys.modules[module].__dict__
    for k, f in d.items():
        if getattr(f, '__annotations__', {}) and not getattr(f, '__type_checked', False):
            log('Decorated {0!r}.'.format(f.__name__))
            d[k] = decorate_func(f)

if __name__ == '__main__':
    decorate_module()
    # This will raise AssertionError.
    func(x=5)
Given this simplicity, it's strange at first sight that this thing is not mainstream. However, I believe there are good reasons why it's not as useful as it might seem. Generally, type checking helps because if you add an integer and a dictionary, chances are you made some obvious mistake (and if you meant something reasonable, it's still better to be explicit than implicit).
But in real life you often mix quantities of the same "computer type" as seen by the compiler that are nevertheless of clearly different "human types". For example, the following snippet contains an obvious mistake:
height = 1.75 # Bob's height in meters.
length = len(sys.modules) # Number of modules imported by program.
area = height * length # What's that supposed to mean???
Any human should immediately see the mistake in the last line, provided they know the "human types" of the variables height and length, even though to the computer it looks like a perfectly legal multiplication of a float and an int.
There's more that can be said about possible solutions to this problem, but enforcing "computer types" is apparently a half-solution, so, at least in my opinion, it's worse than no solution at all. It's the same reason why Systems Hungarian is a terrible idea while Apps Hungarian is a great one. There's more in Joel Spolsky's very informative post.
Now if somebody were to implement some kind of Pythonic third-party library that would automatically assign real-world data its human type, and then took care to transform that type (width * height -> area) and enforce those checks with function annotations, I think that would be type checking people could really use!
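A toy sketch of what such "human types" might look like (not a real library; Meters and SquareMeters are made-up names):

class SquareMeters(float):
    pass

class Meters(float):
    def __mul__(self, other):
        if isinstance(other, Meters):
            return SquareMeters(float(self) * float(other))
        # Raise instead of returning NotImplemented so that mixing a
        # length with a bare number is caught loudly.
        raise TypeError('can only multiply Meters by Meters')

height = Meters(1.75)           # Bob's height in meters.
length = 42                     # Number of modules imported by the program.
area = height * Meters(2.0)     # fine: SquareMeters(3.5)
area = height * length          # TypeError: the 'human type' mistake is caught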
As mentioned in that PEP, static type checking is one of the possible applications that function annotations can be used for, but they're leaving it up to third-party libraries to decide how to do it. That is, there isn't going to be an official implementation in core Python.
As far as third-party implementations are concerned, there are some snippets (such as http://code.activestate.com/recipes/572161/), which seem to do the job pretty well.
EDIT:
As a note, I want to mention that checking behaviour is preferable to checking type, and therefore I think static type checking is not so great an idea. My answer above is aimed at answering the question, not at suggesting I would do type checking this way myself.
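A small sketch of checking behaviour rather than type (describe, shape and Rectangle are made-up names):

def describe(shape):
    # Instead of: assert isinstance(shape, Rectangle)
    # just try to use the behaviour you need and handle the failure.
    try:
        return 'area is {0}'.format(shape.area())
    except AttributeError:
        raise TypeError('expected an object with an area() method')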
"Static typing" in Python can only be implemented so that the type checking is done in run-time, which means it slows down the application. Therefore you don't want that as a generality. Instead you want some of your methods to check it's inputs. This can be easily done with plain asserts, or with decorators if you (mistakenly) think you need it a lot.
There is also an alternative to static type checking, and that is to use an aspect-oriented component architecture like the Zope Component Architecture. Instead of checking the type, you adapt it. So instead of:
assert isinstance(theobject, myclass)
you do this:
theobject = IMyClass(theobject)
If theobject already implements IMyClass nothing happens. If it doesn't, an adapter that wraps whatever theobject is to IMyClass will be looked up, and used instead of theobject. If no adapter is found, you get an error.
This combines the dynamism of Python with the desire to have a specific type in a specific way.
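The real Zope Component Architecture is considerably more capable, but a minimal, library-free sketch of the adaptation idea looks roughly like this (all names below are made up):

_adapters = {}   # maps (source type, target interface) to an adapter factory

def register_adapter(source_type, interface, factory):
    _adapters[(source_type, interface)] = factory

def adapt(obj, interface):
    '''Return obj unchanged if it already provides interface, else wrap it.'''
    if isinstance(obj, interface):
        return obj
    factory = _adapters.get((type(obj), interface))
    if factory is None:
        raise TypeError('No adapter from {0} to {1}'.format(
            type(obj).__name__, interface.__name__))
    return factory(obj)

class IMyClass(object):
    def greet(self):
        raise NotImplementedError

class LegacyThing(object):
    name = 'legacy'

class LegacyThingAdapter(IMyClass):
    def __init__(self, thing):
        self.thing = thing
    def greet(self):
        return 'hello from ' + self.thing.name

register_adapter(LegacyThing, IMyClass, LegacyThingAdapter)
theobject = adapt(LegacyThing(), IMyClass)   # wrapped in the adapter
print(theobject.greet())                     # hello from legacy

If theobject were already an IMyClass, adapt() would hand it back untouched, mirroring the behaviour described above.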
This is not a direct answer to the question, but I found a Python variant that adds static typing: mypy-lang.org. Of course one can't rely on it yet, as it's still a small endeavour, but it's interesting.
Sure, static typing seems a bit "unpythonic" and I don't use it all the time. But there are cases (e.g. nested classes, as in domain specific language parsing) where it can really speed up your development.
For that I prefer using beartype, explained in this post*. It comes with a git repo, tests and an explanation of what it can and can't do ... and I like the name ;)
* Please don't pay attention to Cecil's rant about why Python doesn't come with batteries included in this case.
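A minimal sketch of how beartype is typically used; check the project's documentation for current details, and note that digits here is just an illustration:

from beartype import beartype

@beartype
def digits(x: int) -> str:
    return ''.join(sorted(set(str(x))))

digits(1231)       # fine, returns '123'
digits('oops')     # raises a beartype type-checking exception at call time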
