I want to do something like the following (in Python 3.7):
class Animal:
    def __init__(self, name, legs):
        self.legs = legs
        print(name)

    @classmethod
    def with_two_legs(cls, name):
        # extremely long code to generate name_full from name
        name_full = name
        return cls(name_full, 2)

class Human(Animal):
    def __init__(self):
        super().with_two_legs('Human')

john = Human()
Basically, I want to override the __init__ method of a child class with a factory classmethod of the parent. The code as written, however, does not work, and raises:
TypeError: __init__() takes 1 positional argument but 3 were given
I think this means that super().with_two_legs('Human') passes Human as the cls variable.
1) Why doesn't this work as written? I assumed super() would return a proxy instance of the superclass, so cls would be Animal, right?
2) Even if that were the case, I don't think this code achieves what I want, since the classmethod would return an instance of Animal, and I just want to initialize Human in the same way the classmethod does. Is there any way to achieve the behaviour I want?
I hope this is not a very obvious question, I found the documentation on super() somewhat confusing.
super().with_two_legs('Human') does in fact call Animal's with_two_legs, but it passes Human as the cls, not Animal. super() makes the proxy object only to assist with method lookup; it doesn't change what gets passed (it's still the same self or cls it originated from). In this case, super() isn't even doing anything useful, because Human doesn't override with_two_legs, so:
super().with_two_legs('Human')
means "call with_two_legs from the first class above Human in the hierarchy which defines it", and:
cls.with_two_legs('Human')
means "call with_two_legs on the first class in the hierarchy starting with cls that defines it". As long as no class below Animal defines it, those do the same thing.
This means your code breaks at return cls(name_full, 2), because cls is still Human, and your Human.__init__ doesn't take any arguments beyond self. Even if you futzed around to make it work (e.g. by adding two optional arguments that you ignore), this would cause an infinite loop, as Human.__init__ would call Animal.with_two_legs, which in turn would try to construct a Human, calling Human.__init__ again.
What you're trying to do is not a great idea; alternate constructors, by their nature, depend on the core constructor/initializer for the class. If you try to make a core constructor/initializer that relies on an alternate constructor, you've created a circular dependency.
In this particular case, I'd recommend avoiding the alternate constructor, in favor of either explicitly providing the legs count always, or using an intermediate TwoLeggedAnimal class that performs the task of your alternate constructor. If you want to reuse code, the second option just means your "extremely long code to generate name_full from name" can go in TwoLeggedAnimal's __init__; in the first option, you'd just write a staticmethod that factors out that code so it can be used by both with_two_legs and other constructors that need to use it.
The class hierarchy would look something like:
class Animal:
    def __init__(self, name, legs):
        self.legs = legs
        print(name)

class TwoLeggedAnimal(Animal):
    def __init__(self, name):
        # extremely long code to generate name_full from name
        name_full = name
        super().__init__(name_full, 2)

class Human(TwoLeggedAnimal):
    def __init__(self):
        super().__init__('Human')
The common code approach would instead be something like:
class Animal:
    def __init__(self, name, legs):
        self.legs = legs
        print(name)

    @staticmethod
    def _make_two_legged_name(basename):
        # extremely long code to generate name_full from basename
        name_full = basename  # placeholder for the real logic
        return name_full

    @classmethod
    def with_two_legs(cls, name):
        return cls(cls._make_two_legged_name(name), 2)

class Human(Animal):
    def __init__(self):
        super().__init__(self._make_two_legged_name('Human'), 2)
Side-note: What you were trying to do wouldn't work even if you worked around the recursion, because __init__ doesn't make new instances; it initializes existing instances. So even if you call super().with_two_legs('Human') and it somehow works, it makes and returns a completely different instance, but does nothing to the self received by __init__, which is what's actually being created. The best you'd have been able to do is something like:
def __init__(self):
    self_template = super().with_two_legs('Human')
    # Cheaty way to copy all attributes from self_template to self, assuming no use
    # of __slots__
    vars(self).update(vars(self_template))
There is no way to call an alternate constructor in __init__ and have it change self implicitly. About the only way I can think of to make this work the way you intended, without creating helper methods and while preserving your alternate constructor, would be to use __new__ instead of __init__ (so you can return an instance created by another constructor), and to do awful things with the alternate constructor, explicitly calling the top class's __new__ to avoid circular calling dependencies:
class Animal:
    def __new__(cls, name, legs):  # Use __new__ instead of __init__
        self = super().__new__(cls)  # Constructs base object
        self.legs = legs
        print(name)
        return self  # Returns initialized object

    @classmethod
    def with_two_legs(cls, name):
        # extremely long code to generate name_full from name
        name_full = name
        return Animal.__new__(cls, name_full, 2)  # Explicitly call Animal's __new__ using the correct subclass

class Human(Animal):
    def __new__(cls):
        return super().with_two_legs('Human')  # Return result of alternate constructor
The proxy object you get from calling super was only used to locate the with_two_legs method to be called (and since you didn't override it in Human, you could have used self.with_two_legs for the same result).
As wim commented, your alternative constructor with_two_legs doesn't work because the Human class breaks the Liskov substitution principle by having a different constructor signature. Even if you could get the code to call Animal to build your instance, you'd have problems because you'd end up with an Animal instance and not a Human one (so other methods in Human, if you wrote some, would not be available).
Note that this situation is not that uncommon; many Python subclasses have different constructor signatures than their parent classes. But it does mean that you can't use one class freely in place of the other, as happens with a classmethod that tries to construct instances. You need to avoid those situations.
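A hedged sketch of that failure mode, reusing the classes from the question (the error message is the same one you saw):

class Animal:
    def __init__(self, name, legs):
        self.legs = legs
        print(name)

    @classmethod
    def with_two_legs(cls, name):
        return cls(name, 2)  # assumes every subclass still accepts (name, legs)

class Human(Animal):
    def __init__(self):  # narrower signature: Human is no longer substitutable here
        super().__init__('Human', 2)

Animal.with_two_legs('Generic')  # works
# Human.with_two_legs('John')    # TypeError: __init__() takes 1 positional argument but 3 were given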
In this case, you are probably best served by using a default value for the legs argument to the Animal constructor. It can default to 2 legs if no alternative number is passed. Then you don't need the classmethod, and you don't run into problems when you override __init__:
class Animal:
    def __init__(self, name, legs=2):  # legs is now optional, defaults to 2
        self.legs = legs
        print(name)

class Human(Animal):
    def __init__(self):
        super().__init__('Human')

john = Human()
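With the default in place you can still create animals with any other leg count, so nothing is lost by dropping the classmethod (the extra example values are arbitrary):

rex = Animal('Dog', 4)        # prints "Dog"; rex.legs == 4
spider = Animal('Spider', 8)  # prints "Spider"; spider.legs == 8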
According to the Python docs, super() is useful for accessing inherited methods that have been overridden in a class.
I understand that super refers to the parent class and it lets you access parent methods. My question is why do people always use super inside the init method of the child class? I have seen it everywhere. For example:
class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def __init__(self, **kwargs):
        super().__init__(name=kwargs['name'])  # Here super is being used

    def first_letter(self):
        return self.name[0]

e = Employee(name="John")
print(e.first_letter())
I can accomplish the same without super and without even an init method:
class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def first_letter(self):
        return self.name[0]

e = Employee(name="John")
print(e.first_letter())
Are there drawbacks with the latter code? It looks so much cleaner to me. I don't even have to use the boilerplate **kwargs and kwargs['argument'] syntax.
I am using Python 3.8.
Edit: Here's another Stack Overflow question which has code from different people who are using super in the child's __init__ method. I don't understand why. My best guess is there's something new in Python 3.8.
The child might want to do something different from, or more likely in addition to, what the superclass does - in this case the child must have an __init__.
Calling super’s init means that you don’t have to copy/paste (with all the implications for maintenance) that init in the child’s class, which otherwise would be needed if you wanted some additional code in the child init.
But note there are complications about using super's init if you use multiple inheritance (e.g. which super gets called) and this needs care. Personally I avoid multiple inheritance and keep inheritance to a minimum anyway - it's easy to get tempted into creating multiple levels of inheritance/class hierarchy, but my experience is that a 'keep it simple' approach is usually much better.
The potential drawback to the latter code is that there is no __init__ method within the Employee class. Since there is none, the __init__ method of the parent class is called. However, as soon as an __init__ method is added to the Employee class (maybe there's some Employee-specific attribute that needs to be initialized, like an id_number), the __init__ method of the parent class is overridden and not called (unless super().__init__() is called), and then an Employee will not have a name attribute.
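A hedged sketch of exactly that failure; id_number is the hypothetical extra attribute mentioned above:

class Person:
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def __init__(self, id_number, **kwargs):
        # super().__init__(**kwargs)  # forgotten, so Person.__init__ never runs
        self.id_number = id_number

e = Employee(42, name="John")
# e.name  -> AttributeError: 'Employee' object has no attribute 'name'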
The correct way to use super here is for both methods to use super. You cannot assume that Person is the last (or at least, next-to-last, before object) class in the MRO.
class Person:
    def __init__(self, name, **kwargs):
        super().__init__(**kwargs)
        self.name = name

class Employee(Person):
    # Optional, since Employee.__init__ does nothing
    # except pass the exact same arguments "upstream"
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def first_letter(self):
        return self.name[0]
Consider a class definition like
class Bar:
    ...

class Foo(Person, Bar):
    ...
The MRO for Foo looks like [Foo, Person, Bar, object]; the call to super().__init__ inside Person.__init__ would call Bar.__init__, not object.__init__, and Person has no way of knowing if values in **kwargs are meant for Bar, so it must pass them on.
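A hedged, runnable version of that chain; Bar and its hours parameter are invented for illustration, and Person is the cooperative version above:

class Bar:
    def __init__(self, hours=0, **kwargs):
        super().__init__(**kwargs)  # keep the chain going, eventually reaching object
        self.hours = hours

class Foo(Person, Bar):
    pass

f = Foo(name="Ada", hours=40)
print(Foo.__mro__)      # Foo, Person, Bar, object
print(f.name, f.hours)  # Ada 40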
I am a beginner in Python and using Lutz's book to understand OOP in Python. This question might be basic, but I'd appreciate any help. I researched SO and found answers on "how", but not "why".
As I understand from the book, if Sub inherits Super then one need not call superclass' (Super's) __init__() method.
Example:
class Super:
    def __init__(self, name):
        self.name = name
        print("Name is:", name)

class Sub(Super):
    pass

a = Sub("Harry")
a.name
The above code does assign the attribute name to the object a. It also prints the name as expected.
However, if I modify the code as:
class Super:
    def __init__(self, name):
        print("Inside Super __init__")
        self.name = name
        print("Name is:", name)

class Sub(Super):
    def __init__(self, name):
        Super(name)  # Call __init__ directly

a = Sub("Harry")
a.name
The above code doesn't work fine. By fine, I mean that although Super.__init__() does get called (as seen from the print statements), there is no attribute attached to a. When I run a.name, I get an error, AttributeError: 'Sub' object has no attribute 'name'
I researched this topic on SO, and found the fix on Chain-calling parent constructors in python and Why aren't superclass __init__ methods automatically invoked?
These two threads talk about how to fix it, but they don't provide a reason for why.
Question: Why do I need to call Super's __init__ using Super.__init__(self, name) OR super(Sub, self).__init__(name) instead of a direct call Super(name)?
In both Super.__init__(self, name) and Super(name), we see that Super's __init__() gets called (as seen from the print statements), but only with Super.__init__(self, name) does the attribute get attached to the Sub instance.
Wouldn't Super(name) automatically pass the self (child) object to Super? You might ask how I know that self is automatically passed: if I modify Super(name) to Super(self, name), I get an error message, TypeError: __init__() takes 2 positional arguments but 3 were given. As I understand from the book, self is automatically passed, so effectively we end up passing self twice.
I don't know why Super(name) doesn't attach name attribute to Sub even though Super.__init__() is run. I'd appreciate any help.
For reference, here's the working version of the code based on my research from SO:
class Super:
    def __init__(self, name):
        print("Inside __init__")
        self.name = name
        print("Name is:", name)

class Sub(Super):
    def __init__(self, name):
        # Super.__init__(self, name)      # One way to fix this
        super(Sub, self).__init__(name)  # Another way to fix this

a = Sub("Harry")
a.name
PS: I am using Python-3.6.5 under Anaconda Distribution.
As I understand from the book, if Sub inherits Super then one need not call superclass' (Super's) __init__() method.
This is misleading. It's true that you aren't required to call the superclass's __init__ method—but if you don't, whatever it does in __init__ never happens. And for normal classes, all of that needs to be done. Skipping the call is occasionally useful, usually when a class wasn't designed to be inherited from, like this:
import codecs

class Rot13Reader:
    def __init__(self, filename):
        self.file = open(filename)

    def close(self):
        self.file.close()

    def dostuff(self):
        line = next(self.file)
        return codecs.encode(line, 'rot13')
Imagine that you want all the behavior of this class, but with a string rather than a file. The only way to do that is to skip the open:
import io

class LocalRot13Reader(Rot13Reader):
    def __init__(self, s):
        # don't call super().__init__, because we don't have a filename to open
        # instead, set up self.file with something else
        self.file = io.StringIO(s)
Here, we wanted to avoid the self.file assignment in the superclass. In your case—as with almost all classes you're ever going to write—you don't want to avoid the self.name assignment in the superclass. That's why, even though Python allows you to not call the superclass's __init__, you almost always call it.
Notice that there's nothing special about __init__ here. For example, we can override dostuff to call the base class's version and then do extra stuff:
def dostuff(self):
    result = super().dostuff()
    return result.upper()
… or we can override close and intentionally not call the base class:
def close(self):
    # do nothing, including no super, because we borrowed our file
    pass
The only difference is that good reasons to avoid calling the base class tend to be much more common in normal methods than in __init__.
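For concreteness, a small usage sketch of the LocalRot13Reader defined above (the input string is just an example):

reader = LocalRot13Reader("uryyb jbeyq\n")
print(reader.dostuff())  # hello world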
Question: Why do I need to call Super's __init__ using Super.__init__(self, name) OR super(Sub, self).__init__(name) instead of a direct call Super(name)?
Because these do very different things.
Super(name) constructs a new Super instance, calls __init__(name) on it, and returns it to you. And you then ignore that value.
In particular, Super.__init__ does get called one time either way—but the self it gets called with is that new Super instance, that you're just going to throw away, in the Super(name) case, while it's your own self in the super(Sub, self).__init__(name) case.
So, in the first case, it sets the name attribute on some other object that gets thrown away, and nobody ever sets it on your object, which is why self.name later raises an AttributeError.
It might help you understand this if you add something to both class's __init__ methods to show which instance is involved:
class Super:
    def __init__(self, name):
        print(f"Inside Super __init__ for {self}")
        self.name = name
        print("Name is:", name)

class Sub(Super):
    def __init__(self, name):
        print(f"Inside Sub __init__ for {self}")
        # line you want to experiment with goes here.
If that last line is super().__init__(name), super(Sub, self).__init__(name), or Super.__init__(self, name), you will see something like this:
Inside Sub __init__ for <__main__.Sub object at 0x10f7a9e80>
Inside Super __init__ for <__main__.Sub object at 0x10f7a9e80>
Notice that it's the same object, the Sub at address 0x10f7a9e80, in both cases.
… but if that last line is Super(name):
Inside Sub __init__ for <__main__.Sub object at 0x10f7a9ea0>
Inside Super __init__ for <__main__.Super object at 0x10f7a9ec0>
Now we have two different objects, at different addresses 0x10f7a9ea0 and 0x10f7a9ec0, and with different types.
If you're curious about what the magic all looks like under the covers, Super(name) does something like this (oversimplifying a bit and skipping over some steps¹):
_newobj = Super.__new__(Super)
if isinstance(_newobj, Super):
    Super.__init__(_newobj, name)
… while super(Sub, self).__init__(name) does something like this:
_basecls = magically_find_next_class_in_mro(Sub)
_basecls.__init__(self, name)
As a side note, if a book is telling you to use super(Sub, self).__init__(name) or Super.__init__(self, name), it's probably an obsolete book written for Python 2.
In Python 3, you just do this:
super().__init__(name): Calls the correct next superclass by method resolution order. You almost always want this.
super(Sub, self).__init__(name): Calls the correct next superclass—unless you make a mistake and get Sub wrong there. You only need this if you're writing dual-version code that has to run in 2.7 as well as 3.x.
Super.__init__(self, name): Calls Super, whether it's the correct next superclass or not. You only need this if the method resolution order is wrong and you have to work around it.²
If you want to understand more, it's all in the docs, but it can be a bit daunting:
__new__
__init__
super (also see Raymond Hettinger's blog post)
method invocation (also see the HOWTO)
The original introduction to super, __new__, and all the related features was very helpful to me in understanding all of this. I'm not sure if it'll be as helpful to someone who's not coming at this already understanding old-style Python classes, but it's pretty well written, and Guido (obviously) knows what he's talking about, so it might be worth reading.
1. The biggest cheat in this explanation is that super actually returns a proxy object that acts like _basecls bound to self in the same way methods are bound, which can be used to bind methods, like __init__. This is useful/interesting knowledge if you know how methods work, but probably just extra confusion if you don't.
2. … or if you're working with old-style classes, which don't support super (or proper method-resolution order). This never comes up in Python 3, which doesn't have old-style classes. But, unfortunately, you will see it in lots of tkinter examples, because the best tutorial is still Effbot's, which was written for Python 2.3, when Tkinter was all old-style classes, and has never been updated.
Super(name) is not a "direct call" to the superclass __init__. After all, you called Super, not Super.__init__.
Super.__init__ takes an uninitialized Super instance and initializes it. Super creates and initializes a new, completely separate instance from the one you wanted to initialize (and then you immediately throw the new instance away). The instance you wanted to initialize is untouched.
Super(name) instantiates a new instance of Super. Think of this example:
def __init__(self, name):
    x1 = Super(name)
    x2 = Super("some other name")
    assert x1 is not self
    assert x2 is not self
In order to explicitly call Super's constructor on the current instance, you'd have to use the following syntax:
def __init__(self, name):
    Super.__init__(self, name)
Now, maybe you don't want to read further if you are a beginner.
If you do, you will see that there is a good reason to use super(Sub, self).__init__(name) (or super().__init__(name) in Python 3) instead of Super.__init__(self, name).
Super.__init__(self, name) works fine, as long as you are certain that Super is in fact your superclass. But in fact, you never know that for sure.
You could have the following code:
class Super:
    def __init__(self):
        print('Super __init__')

class Sub(Super):
    def __init__(self):
        print('Sub __init__')
        Super.__init__(self)

class Sub2(Super):
    def __init__(self):
        print('Sub2 __init__')
        Super.__init__(self)

class SubSub(Sub, Sub2):
    pass
You would now expect that SubSub() ends up calling all of the above constructors, but it does not:
>>> x = SubSub()
Sub __init__
Super __init__
>>>
To correct it, you'd have to do:
class Super:
    def __init__(self):
        print('Super __init__')

class Sub(Super):
    def __init__(self):
        print('Sub __init__')
        super().__init__()

class Sub2(Super):
    def __init__(self):
        print('Sub2 __init__')
        super().__init__()

class SubSub(Sub, Sub2):
    pass
Now it works:
>>> x = SubSub()
Sub __init__
Sub2 __init__
Super __init__
>>>
The reason is that although the superclass of Sub is declared to be Super, in the case of the multiple inheritance used by SubSub, Python's MRO (method resolution order) linearizes the hierarchy as: SubSub inherits from Sub, which inherits from Sub2, which inherits from Super, which inherits from object.
You can test that:
>>> SubSub.__mro__
(<class '__main__.SubSub'>, <class '__main__.Sub'>, <class '__main__.Sub2'>, <class '__main__.Super'>, <class 'object'>)
Now, the super() call in constructors of each of the classes finds the next class in the MRO so that the constructor of that class can be called.
See https://www.python.org/download/releases/2.3/mro/
I need to create an UNBOUND method call to Plant to set up name and leaves, and I don't know how. Any help is appreciated.
My code:
class Plant(object):
    def __init__(self, name : str, leaves : int):
        self.plant_name = name
        self.leaves = leaves

    def __str__(self):
        return "{} {}".format(self.plant_name, self.leaves)

    def __eq__(self, plant1):
        if self.leaves == plant1.leaves:
            return self.leaves

    def __lt__(self, plant1):
        if self.leaves < plant1.leaves:
            print("{} has more leaves than {}".format(plant1.plant_name, self.plant_name))
            return self.leaves < plant1.leaves
        elif self.leaves > plant1.leaves:
            print("{} has more leaves than {}".format(self.plant_name, plant1.plant_name))
            return self.leaves < plant1.leaves
class Flower(Plant):
    def __init__(self, color : str, petals : int):
        self.color = color
        self.petals = petals

    def pick_petal(self.petals)
        self.petals += 1
Exact wording of the assignment:
Create a new class called Flower. Flower is subclassed from the Plant class; so besides name, and leaves, it adds 2 new attributes; color, petals. Color is a string that contains the color of the flower, and petal is an int that has the number of petals on the flower. You should be able to create an init method to setup the instance. With the init you should make an UNBOUND method call to plant to setup the name and leaves. In addition, create a method called pick_petal that decrements the number of petals on the flower.
An "unbound method call" means you're calling a method on the class rather than on an instance of the class. That means something like Plant.some_method.
The only sort of unbound call that makes sense in this context is to call the __init__ method of the base class. That seems to fulfill the requirement to "setup the names and leaves", and in the past was the common way to do inheritance.
It looks like this:
class Flower(Plant):
    def __init__(self, name, leaves, color, petals):
        Plant.__init__(self, ...)
        ...
You will need to pass in the appropriate arguments to __init__. The first is self, the rest are defined by Plant.__init__ in the base class. You'll also need to fix the syntax of the pick_petal definition, as def pick_petal(self.petals) is not valid Python.
Note: generally speaking, a better solution is to call super rather than doing an unbound method call on the parent class __init__. I'm not sure what you can do with that advice, though. Maybe the instructor is having you do inheritance the old way first before learning the new way?
For this assignment you should probably use Plant.__init__(...) since that's what the assignment is explicitly asking you to do. You might follow up with the instructor to ask about super.
The answer from Bryan is perfect. Just for the sake of completion:
# Looks like the assignment asks for this
class Flower(Plant):
    def __init__(self, name, leaves, color, petals):
        # call __init__ from parent so you don't repeat code already there
        Plant.__init__(self, name, leaves)
        self.color = color
        self.petals = petals
This is the "classic", "non-cooperative" inheritance style and came out of fashion a long time ago (almost 15 years as of 2016), because it breaks with multiple inheritance. For reference see the post "Unifying types and classes in Python 2.2" by the BDFL. At first I thought it could be a very old assignment, but I see the assignment uses the "new-style" inheritance (inheriting from object was the signature of the new-style in Python 2 because the default is the old-style, in Python 3 there is only the new-style). In order to make it work for multiple inheritance, instead of calling the parent class explicitly (the Plant.__init__ statement), we use the super function like this in Python 2:
super(Flower, self).__init__(name, leaves)
Or just this in Python 3 (after PEP 3135, to be precise):
super().__init__(name, leaves)
Even though in Python 3 new-style inheritance is the default, you are still encouraged to explicitly inherit from object.
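Putting that together, a hedged sketch of the same Flower using Python 3's zero-argument super(); the parameter names follow the assignment, and pick_petal decrements as the assignment asks:

class Flower(Plant):
    def __init__(self, name, leaves, color, petals):
        super().__init__(name, leaves)
        self.color = color
        self.petals = petals

    def pick_petal(self):
        self.petals -= 1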
I'm enhancing an existing class that does some calculations in the __init__ function to determine the instance state. Is it ok to call __init__() from __setstate__() in order to reuse those calculations?
To summarize reactions from Kroltan and jonsrharpe:
Technically it is OK
Technically it will work and if you do it properly, it can be considered OK.
Practically it is tricky, avoid that
If you edit the code in the future and touch __init__, it is easy (even for you) to forget about its use in __setstate__, and then you end up in a difficult-to-debug situation (asking yourself where it comes from).
class Calculator():
    def __init__(self):
        # some calculation stuff here
        pass

    def __setstate__(self, state):
        self.__init__()
The calculation stuff is better to get isolated into another shared method:
class Calculator():
    def __init__(self):
        self._shared_calculation()

    def __setstate__(self, state):
        self._shared_calculation()

    def _shared_calculation(self):
        # some calculation stuff here
        pass
This way, you are much more likely to notice it.
Note: the use of "_" as a prefix for the shared method is arbitrary; you do not have to do that.
It's usually preferable to write a method called __getnewargs__ instead. That way, the pickling mechanism will call __new__ with those arguments for you automatically when unpickling.
Another approach is to customize the constructor __init__ in a subclass. Ideally it is better to have one constructor and adapt it to your needs in the subclass:
class Person:
    def __init__(self, name, job=None, pay=0):
        self.name = name
        self.job = job
        self.pay = pay

class Manager(Person):
    def __init__(self, name, pay):
        Person.__init__(self, name, 'title', pay)  # Run constructor with 'title'
Calling superclass constructors this way turns out to be a very common coding pattern in Python. By itself, Python uses inheritance to look for and call only one __init__ method at construction time—the lowest one in the class tree.
If you need higher __init__ methods to be run at construction time, you must call them manually, usually through the superclass name as shown in the code above. This way you augment the superclass constructor, or replace its logic in the subclass altogether, to your liking.
As suggested by Jan, it is tricky, and you will end up in a difficult debugging situation if you call it in the same class.
I think you can define either __init__ or __new__ in a class, but why are both defined in django.utils.datastructures.py?
my code:
class a(object):
    def __init__(self):
        print 'aaa'

    def __new__(self):
        print 'sss'

a()  # prints 'sss'

class b:
    def __init__(self):
        print 'aaa'

    def __new__(self):
        print 'sss'

b()  # prints 'aaa'
datastructures.py:
class SortedDict(dict):
    """
    A dictionary that keeps its keys in the order in which they're inserted.
    """
    def __new__(cls, *args, **kwargs):
        instance = super(SortedDict, cls).__new__(cls, *args, **kwargs)
        instance.keyOrder = []
        return instance

    def __init__(self, data=None):
        if data is None:
            data = {}
        super(SortedDict, self).__init__(data)
        if isinstance(data, dict):
            self.keyOrder = data.keys()
        else:
            self.keyOrder = []
            for key, value in data:
                if key not in self.keyOrder:
                    self.keyOrder.append(key)
And under what circumstances will SortedDict.__init__ be called?
Thanks.
You can define either or both of __new__ and __init__.
__new__ must return an object -- which can be a new one (typically that task is delegated to type.__new__), an existing one (to implement singletons, "recycle" instances from a pool, and so on), or even one that's not an instance of the class. If __new__ returns an instance of the class (new or existing), __init__ then gets called on it; if __new__ returns an object that's not an instance of the class, then __init__ is not called.
__init__ is passed a class instance as its first item (in the same state __new__ returned it, i.e., typically "empty") and must alter it as needed to make it ready for use (most often by adding attributes).
In general it's best to use __init__ for all it can do -- and __new__, if something is left that __init__ can't do, for that "extra something".
So you'll typically define both if there's something useful you can do in __init__, but not everything you want to happen when the class gets instantiated.
For example, consider a class that subclasses int but also has a foo slot -- and you want it to be instantiated with an initializer for the int and one for the .foo. As int is immutable, that part has to happen in __new__, so pedantically one could code:
>>> class x(int):
...     def __new__(cls, i, foo):
...         self = int.__new__(cls, i)
...         return self
...     def __init__(self, i, foo):
...         self.foo = foo
...     __slots__ = 'foo',
...
>>> a = x(23, 'bah')
>>> print a
23
>>> print a.foo
bah
>>>
In practice, for a case this simple, nobody would mind if you lost the __init__ and just moved the self.foo = foo to __new__. But if initialization is rich and complex enough to be best placed in __init__, this idea is worth keeping in mind.
__new__ and __init__ do completely different things. The method __init__ initializes a new instance of a class --- it is a constructor. __new__ is a far more subtle thing --- it can change arguments and, in fact, the class of the created object. For example, the following code:
class Meters(object):
    def __new__(cls, value):
        return int(value / 3.28083)
If you call Meters(6) you will not actually create an instance of Meters, but an instance of int. You might wonder why this is useful; it is actually crucial to metaclasses, an admittedly obscure (but powerful) feature.
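A quick interactive check of that claim, shown as a Python 2 session to match the rest of the question (the values are arbitrary):

>>> m = Meters(6)
>>> m
1
>>> type(m)
<type 'int'>
>>> isinstance(m, Meters)
False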
You'll note that in Python 2.x, only classes inheriting from object can take advantage of __new__, as your code above shows.
The use of __new__ you showed in django seems to be an attempt to keep a sane method resolution order on SortedDict objects. I will admit, though, that it is often hard to tell why __new__ is necessary. Standard Python style suggests that it not be used unless necessary (as always, better class design is the tool you turn to first).
My only guess is that in this case, they (author of this class) want the keyOrder list to exist on the class even before SortedDict.__init__ is called.
Note that SortedDict calls super() in its __init__, this would ordinarily go to dict.__init__, which would probably call __setitem__ and the like to start adding items. SortedDict.__setitem__ expects the .keyOrder property to exist, and therein lies the problem (since .keyOrder isn't normally created until after the call to super().) It's possible this is just an issue with subclassing dict because my normal gut instinct would be to just initialize .keyOrder before the call to super().
The code in __new__ might also be used to allow SortedDict to be subclassed in a diamond inheritance structure where it is possible SortedDict.__init__ is not called before the first __setitem__ and the like are called. Django has to contend with various issues in supporting a wide range of Python versions from 2.3 up; it's possible this code is completely unnecessary in some versions and needed in others.
There is a common use for defining both __new__ and __init__: accessing class properties which may be eclipsed by their instance versions without having to do type(self) or self.__class__ (which, in the presence of metaclasses, may not even be the right thing).
For example:
class MyClass(object):
    creation_counter = 0

    def __new__(cls, *args, **kwargs):
        cls.creation_counter += 1
        return super(MyClass, cls).__new__(cls)

    def __init__(self):
        print "I am the %dth myclass to be created!" % self.creation_counter
Finally, __new__ can actually return an instance of a wrapper or a completely different class from what you thought you were instantiating. This is used to provide metaclass-like features without actually needing a metaclass.
In my opinion, there was no need of overriding __new__ in the example you described.
Creation of an instance and actual memory allocation happens in __new__; __init__ is called after __new__ and is meant for the initialization of the instance, serving the job of a constructor in classical OOP terms. So, if all you want to do is initialize variables, then you should override __init__.
The real role of __new__ comes into play when you are using metaclasses. There, if you want to do something like changing or adding attributes, and that must happen before the creation of the class, you should override __new__.
Consider a completely hypothetical case where you want to make some attributes of a class private, even though they are not defined that way (I'm not saying one should ever do that).
class PrivateMetaClass(type):
    def __new__(metaclass, classname, bases, attrs):
        private_attributes = ['name', 'age']
        for private_attribute in private_attributes:
            if attrs.get(private_attribute):
                attrs['_' + private_attribute] = attrs[private_attribute]
                attrs.pop(private_attribute)
        return super(PrivateMetaClass, metaclass).__new__(metaclass, classname, bases, attrs)

class Person(object):
    __metaclass__ = PrivateMetaClass

    name = 'Someone'
    age = 19

person = Person()
>>> hasattr(person, 'name')
False
>>> person._name
'Someone'
Again, it's just for instructional purposes; I'm not suggesting one should do anything like this.