I'm trying to understand how Zope interfaces work. I know Interface is just an instance of InterfaceClass, which is itself an ordinary class. But if Interface is just a class instance, why can it be used as a base class to be inherited from?
e.g.
class IFoo(Interface):
pass
Could you give me some insights? Thank you.
Python is inherently flexible, and any object can be a base class as long as it looks like a base class. As is always the case with Python, that means implementing the attributes that are expected to be found on a Python class.
The Interface class (or its bases Specification and Element) sets several. Look for any variables set starting with a double underscore (__) to gain an understanding:
__module__: A string containing the Python path of the module in which the class was defined.
__name__: The name under which the class was defined.
__bases__: The base classes of this class.
__doc__: (optional) The docstring of the class.
In addition, the InterfaceClass __init__ method will be called when used as a base class; Python basically treats base classes as metaclasses, and a new instance of the base class's class (metaclass) will be created whenever we use it in a class definition. This means that the __init__ method will be passed the new __name__ and __bases__ values, as well as all the new class attributes as keyword arguments (including __module__ and an optional __doc__).
This is all documented in the Standard type hierarchy section of the Python Data Model document (look for the 'classes' paragraph on special attributes), and in the same document, in the Customizing class creation section (base classes with a __class__ attribute are deemed a type).
So, any Python instance that defines at least the __module__, __name__ and __bases__ attributes, and a suitable __init__ method, will work as a base class for other classes. Python does the rest.
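To make this concrete, here is a minimal sketch (not the actual zope.interface source, just an assumed toy class called PseudoClass) of an ordinary class whose instances carry those attributes and can therefore be used as base classes:

class PseudoClass(object):
    """An ordinary class whose instances can act as base 'classes'."""
    def __init__(self, name, bases=(), attrs=None):
        attrs = attrs or {}
        self.__name__ = name                       # name the "class" was defined under
        self.__bases__ = bases                     # its base "classes"
        self.__module__ = attrs.get('__module__')  # filled in by Python for class bodies
        self.__doc__ = attrs.get('__doc__')        # optional docstring

# An instance, analogous to zope.interface.Interface:
Interface = PseudoClass('Interface')

# When Interface is used as a base, Python picks type(Interface) -- i.e.
# PseudoClass -- as the metaclass and calls it with the new name, bases and
# class body, so IFoo ends up as another PseudoClass instance:
class IFoo(Interface):
    """Docstring for IFoo."""

print(type(IFoo))       # <class '__main__.PseudoClass'>
print(IFoo.__name__)    # IFoo
print(IFoo.__bases__)   # (<__main__.PseudoClass object at 0x...>,)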
Related
Why must instance variables be defined inside of methods? In other words why must self only be used to define new variables inside of methods in a class. Why can't you define variables using self as part of the class, but outside of methods.
"Instance variables are those variables for which each class object has it's own copy of it" - this definition doesn't say anything about methods. So, given that the definition doesn't mention methods why can't I define an instance variable (in other words use self to define a new variable) inside of a class, but outside of a method?
Python requires the object reference (the implicit or explicit this in Java, for example) to be explicit. Inside methods -- bound functions -- the first parameter in the function definition is the instance. (This is conventionally called self, but you can use any name.)
If you define
class C:
x = 1
there is no self reference, unlike, e.g. Java, where this is implicit.
Because the mechanism which Python uses to deal with OOP is very simple. There's no special syntax to define classes really; the class keyword is a very thin layer over what amounts to creating a dict. Everything you define inside a class Foo: block basically ends up as the contents of Foo.__dict__. So there's no syntax to define attributes of the instance resulting from calling Foo(). You add instance attributes simply by attaching them to the object you get from calling Foo(), which is self in __init__ or other instance methods.
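A quick illustration (hypothetical names) of that split between the class's dict and the instance's dict:

class Foo:
    shared = 1                      # defined in the class block, lives in Foo.__dict__

    def __init__(self):
        self.per_instance = 2       # attached to the instance, lives in self.__dict__

f = Foo()
print('shared' in Foo.__dict__)        # True
print('per_instance' in Foo.__dict__)  # False
print('per_instance' in f.__dict__)    # True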
To answer that, you need to know a little bit about how the Python interpreter works.
In general, every class and method definition is a separate object.
What you do when calling a method is pass the class instance as the first parameter to the method. With that, the method knows which instance it is running on (and therefore where to attach instance variables).
This however only counts for instance methods.
Of course you can also create classmethods with @classmethod; these take the class as an argument instead of an instance and can therefore not be used to create variables on the self context.
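For instance, a small made-up example contrasting the two kinds of methods:

class Counter:
    label = "counter"                 # class attribute, shared by all instances

    def increment(self):              # instance method: receives the instance as self
        self.count = getattr(self, 'count', 0) + 1

    @classmethod
    def describe(cls):                # classmethod: receives the class, not an instance
        return "I am a %s" % cls.label

c = Counter()
c.increment()
print(c.count)             # 1 -- an instance variable created on c
print(Counter.describe())  # "I am a counter" -- no instance involved at all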
Why must instance variables be defined inside of methods?
They don't. You can define them from anywhere, as long as you have an instance (of a mutable type):
class Foo(object):
pass
f = Foo()
f.bar = 42
print(f.bar)
In other words why must self only be used to define new variables inside of methods in a class. Why can't you define variables using self as part of the class, but outside of methods.
self (which is only a naming convention, there's absolutely nothing magical here) is used to represent the current instance. How could you use it at the class block's top level, where you don't have any instance at all (and not even the class itself, FWIW)?
Defining the class "members" at the class top-level is mostly a static languages thing, where "objects" are mainly (technically) structs (C style structs, or Pascal style records if you prefer) with a statically defined memory structure.
Python is a dynamic language, which instead uses dicts as supporting data structure, so someobj.attribute is usually (minus computed attributes etc) resolved as someobj.__dict__["attribute"] (and someobj.attribute = value as someobj.__dict__["attribute"] = value).
So 1/ it doesn't NEED to have a fixed, explicitly defined data structure, and 2/ yet it DOES need to have an instance at hand to set an attribute on it.
Note that you can force a class to use a fixed memory structure (instead of a plain dict) using __slots__, but you will still need to set the values from within a method (canonically __init__, which exists for this very reason: initializing the instance's attributes).
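A brief sketch of that, assuming a toy Point class: the attribute set is fixed by __slots__, but the values are still assigned on the instance, typically in __init__:

class Point:
    __slots__ = ('x', 'y')      # fixed attribute set, no per-instance __dict__

    def __init__(self, x, y):
        self.x = x              # still set on the instance, from within a method
        self.y = y

p = Point(1, 2)
p.x = 10                        # fine, 'x' is a declared slot
# p.z = 3                       # AttributeError: 'Point' object has no attribute 'z'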
I have a problem understanding some concepts of data structures in Python, in the following code.
class Stack(object): #1
    def __init__(self): #2
        self.items = []
    def isEmpty(self):
        return self.items == []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        self.items.pop()
    def peak(self):
        return self.items[len(self.items) - 1]
    def size(self):
        return len(self.items)

s = Stack()
s.push(3)
s.push(7)
print(s.peak())
print(s.size())
s.pop()
print(s.size())
print(s.isEmpty())
I don't understand what this object argument is.
I replaced it with (obj) and it generated an error; why?
I tried to remove it and it worked perfectly; why?
Why do I have to use __init__ to set up a constructor?
self is an argument, but how does it get passed? And which object does it represent, the class itself?
Thanks.
object is a class from which the class Stack inherits. There is no class obj, hence the error. However, you can define a class that does not inherit from anything (at least, in Python 2).
self represents the object on which the method is called; for example, when you do s.pop(), self inside the method pop refers to the same object as s - it is not a class, it is an instance of the class.
1
object here is the class your new class inherits from. There is already a base class named object, but there is no class named obj, which is why replacing object with obj causes an error. Anyway, in your example code it is not needed at all, since all classes in Python 3 implicitly extend the object class.
2
__init__ is the constructor of the object, and self there represents the object you are creating itself, not the class, just like in the other methods you defined.
Point 1:
Some history required here... Originally Python had two distinct kinds of types, those implemented in C (whether in the stdlib or in C extensions) and those implemented in Python with the class statement. Python 2.2 introduced a new object model (known as "new-style classes") to unify both, but kept the "classic" (aka "old-style") model for compatibility. This new model also introduced quite a lot of goodies like support for computed attributes, cooperative super calls via the super() object, metaclasses etc, all of which come from the builtin object base class.
So in Python 2.2.x to 2.7.x, you can either create a new-style class by inheriting from object (or any subclass of object) or an old-style one by not inheriting from object (nor - obviously - any subclass of object).
In Python 2.7, since your example Stack class does not use any feature of the new object model, it works equally well as an 'old-style' or as a 'new-style' class, but try to add a custom metaclass or a computed attribute and it will break in one way or another.
Python 3 totally removed old-style class support, and object is the default base class if you don't explicitly specify one, so whatever you do your class WILL inherit from object and will work the same with or without an explicit parent class.
You can read this for more details.
Point 2.1 - I'm not sure I understand the question actually, but anyway:
In Python, objects are not fixed C-struct-like structures with a fixed set of attributes, but dict-like mappings (well, there are exceptions, but let's ignore them for the moment). The set of attributes of an object is composed of the class attributes (methods mainly, but really any name defined at the class level), which are shared between all instances of the class, and the instance attributes (belonging to a single instance), which are stored in the instance's __dict__. This implies that you don't define the instance attribute set at the class level (like in Java or C++ etc), but set the attributes on the instance itself.
The __init__ method is there so you can make sure each instance is initialised with the desired set of attributes. It's kind of an equivalent of a Java constructor, but instead of being only used to pass arguments at instantiation, it's also responsible for defining the set of instance attributes for your class (which you would, in Java, define at the class level).
Point 2.2 : self is the current instance of the class (the instance on which the method is called), so if s is an instance of your Stack class, s.push(42) is equivalent to Stack.push(s, 42).
Note that the argument doesn't have to be called self (which is only a convention, albeit a very strong one), the important part is that it's the first argument.
How s gets passed as self when calling s.push(42) is a bit intricate at first, but it is an interesting example of how to use a small feature set to build a larger one. You can find a detailed explanation of the whole mechanism here, so I won't bother reposting it.
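As a quick check of that equivalence, using the Stack class from the question:

s = Stack()
s.push(42)           # the instance s is passed implicitly as self
Stack.push(s, 99)    # passing the instance explicitly does exactly the same thing
print(s.size())      # 2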
The docs say:
If a class defines a slot also defined in a base class, the instance variable defined by the base class slot is inaccessible (except by retrieving its descriptor directly from the base class). This renders the meaning of the program undefined. In the future, a check may be added to prevent this.
How is the undefined behavior introduced? What would be an example? What does the instance look like - does it have both attributes somehow?
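To make the question concrete, here is a minimal sketch of the setup the docs describe (toy classes Base and Child), where the instance ends up with two separate storage slots for the same name:

class Base(object):
    __slots__ = ('x',)

class Child(Base):
    __slots__ = ('x',)   # shadows the slot descriptor defined by Base

c = Child()
c.x = 1                  # goes through Child's descriptor
print(c.x)               # 1

# Base's descriptor still exists and points at its own, separate storage,
# which was never filled in for this instance:
try:
    Base.__dict__['x'].__get__(c, Child)
except AttributeError:
    print("Base's slot for 'x' is unset on c")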
Python (2 only?) looks at the value of the variable __metaclass__ to determine how to create a type object from a class definition. It is possible to define __metaclass__ at the module or package level, in which case it applies to all subsequent class definitions in that module.
However, I encountered the following in the flufl.enum package's __init__.py:
__metaclass__ = type
Since the default metaclass if __metaclass__ is not defined is type, wouldn't this have no effect? (This assignment would revert to the default if __metaclass__ were assigned to at a higher scope, but I see no such assignment.) What is its purpose?
In Python 2, a declaration __metaclass__ = type makes declarations that would otherwise create old-style classes create new-style classes instead. Only old-style classes use a module level __metaclass__ declaration. New-style classes inherit their metaclass from their base class (e.g. object), unless __metaclass__ is provided as a class variable.
The declaration is not actually used in the code you linked to above (there are no class declarations in the __init__.py file), but it could be. I suspect it was included as part of some boilerplate that makes Python 2 code work more like Python 3 (where all classes are always new-style).
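A small Python 2 illustration of the effect described above (under Python 3 the module-level assignment is simply ignored):

__metaclass__ = type      # make bare "class Foo:" definitions new-style in this module

class Foo:                # no explicit base class
    pass

print(type(Foo))                 # <type 'type'> instead of <type 'classobj'>
print(issubclass(Foo, object))   # True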
Yes, it has no effect. It's probably just a misunderstanding from flufl.enum's author, or a leftover from previous code.
A "superpackage" __metaclass__ declaration would have no effect because there is no such a thing as Python superpackages.
Why does the following class declaration inherit from object?
class MyClass(object):
...
Is there any reason for a class declaration to inherit from object?
In Python 3, apart from compatibility between Python 2 and 3, no reason. In Python 2, many reasons.
Python 2.x story:
In Python 2.x (from 2.2 onwards) there's two styles of classes depending on the presence or absence of object as a base-class:
"classic" style classes: they don't have object as a base class:
>>> class ClassicSpam: # no base class
... pass
>>> ClassicSpam.__bases__
()
"new" style classes: they have, directly or indirectly (e.g inherit from a built-in type), object as a base class:
>>> class NewSpam(object): # directly inherit from object
... pass
>>> NewSpam.__bases__
(<type 'object'>,)
>>> class IntSpam(int): # indirectly inherit from object...
... pass
>>> IntSpam.__bases__
(<type 'int'>,)
>>> IntSpam.__bases__[0].__bases__ # ... because int inherits from object
(<type 'object'>,)
Without a doubt, when writing a class you'll always want to go for new-style classes. The perks of doing so are numerous, to list some of them:
Support for descriptors. Specifically, the following constructs are made possible with descriptors:
classmethod: A method that receives the class as an implicit argument instead of the instance.
staticmethod: A method that does not receive the implicit argument self as a first argument.
properties with property: Create functions for managing the getting, setting and deleting of an attribute.
__slots__: Saves memory consumption of a class and also results in faster attribute access. Of course, it does impose limitations.
The __new__ static method: lets you customize how new class instances are created.
Method resolution order (MRO): in what order the base classes of a class will be searched when trying to resolve which method to call.
Related to MRO, super calls. Also see, super() considered super.
If you don't inherit from object, forget these. A more exhaustive description of the previous bullet points along with other perks of "new" style classes can be found here.
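As a quick Python 2 sketch of a couple of those perks (with a classic class, the super() call below raises TypeError and __slots__ is silently ignored):

class Base(object):
    __slots__ = ()                 # no per-instance __dict__ anywhere in this hierarchy

    def greet(self):
        return "base"

class Child(Base):
    __slots__ = ('name',)          # the only attribute instances can have

    def __init__(self, name):
        self.name = name

    def greet(self):
        # cooperative call up the MRO, only possible on new-style classes
        return super(Child, self).greet() + " -> child"

c = Child("spam")
print(c.greet())    # base -> child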
One of the downsides of new-style classes is that the class itself is more memory demanding. Unless you're creating many class objects, though, I doubt this would be an issue and it's a negative sinking in a sea of positives.
Python 3.x story:
In Python 3, things are simplified. Only new-style classes exist (referred to plainly as classes), so the only difference in adding object is requiring you to type 8 more characters. This:
class ClassicSpam:
pass
is completely equivalent (apart from their name :-) to this:
class NewSpam(object):
pass
and to this:
class Spam():
pass
All have object in their __bases__.
>>> [object in cls.__bases__ for cls in {Spam, NewSpam, ClassicSpam}]
[True, True, True]
So, what should you do?
In Python 2: always inherit from object explicitly. Get the perks.
In Python 3: inherit from object if you are writing code that tries to be Python agnostic, that is, it needs to work both in Python 2 and in Python 3. Otherwise don't, it really makes no difference since Python inserts it for you behind the scenes.
Python 3
class MyClass(object): = New-style class
class MyClass: = New-style class (implicitly inherits from object)
Python 2
class MyClass(object): = New-style class
class MyClass: = OLD-STYLE CLASS
Explanation:
When defining base classes in Python 3.x, you're allowed to drop object from the definition. However, this can open the door to a seriously hard-to-track problem…
Python introduced new-style classes back in Python 2.2, and by now old-style classes are really quite old. Discussion of old-style classes is buried in the 2.x docs, and non-existent in the 3.x docs.
The problem is, the syntax for old-style classes in Python 2.x is the same as the alternative syntax for new-style classes in Python 3.x. Python 2.x is still very widely used (e.g. GAE, Web2Py), and any code (or coder) unwittingly bringing 3.x-style class definitions into 2.x code is going to end up with some seriously outdated base objects. And because old-style classes aren’t on anyone’s radar, they likely won’t know what hit them.
So just spell it out the long way and save some 2.x developer the tears.
Yes, this is a 'new style' object. It was a feature introduced in Python 2.2.
New-style objects have a different object model from classic objects, and some things won't work properly with old-style objects, for instance super(), @property and descriptors. See this article for a good description of what a new-style class is.
SO link for a description of the differences: What is the difference between old style and new style classes in Python?
History from Learn Python the Hard Way:
Python's original rendition of a class was broken in many serious
ways. By the time this fault was recognized it was already too late,
and they had to support it. In order to fix the problem, they needed
some "new class" style so that the "old classes" would keep working
but you can use the new more correct version.
They decided that they would use a word "object", lowercased, to be
the "class" that you inherit from to make a class. It is confusing,
but a class inherits from the class named "object" to make a class but
it's not an object really, it's a class, but don't forget to inherit
from object.
Also, just to let you know what the difference between new-style classes and old-style classes is: new-style classes always inherit from the object class or from another class that inherited from object:
class NewStyle(object):
pass
Another example is:
class AnotherExampleOfNewStyle(NewStyle):
pass
While an old-style base class looks like this:
class OldStyle():
pass
And an old-style child class looks like this:
class OldStyleSubclass(OldStyle):
pass
You can see that an old-style base class doesn't inherit from any other class; however, old-style classes can, of course, inherit from one another. Inheriting from object guarantees that certain functionality is available in every Python class. New-style classes were introduced in Python 2.2.
Yes, it's historical. Without it, it creates an old-style class.
If you use type() on an old-style instance, you just get "instance". On a new-style instance you get its class.
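A short Python 2 session showing that difference:

class Old:
    pass

class New(object):
    pass

print(type(Old()))   # <type 'instance'>
print(type(New()))   # <class '__main__.New'>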
The syntax of the class creation statement:
class <ClassName>(superclass):
#code follows
In the absence of any other superclasses that you specifically want to inherit from, the superclass should always be object, which is the root of all classes in Python.
object is technically the root of "new-style" classes in Python. But new-style classes today are effectively the only style of classes.
But if you don't explicitly use the word object when creating classes, then, as others mentioned, Python 3.x implicitly inherits from the object superclass. But I guess explicit is always better than implicit.
Reference