Is there any way to connect two classes (without merging them into one) and thus avoid the repetition under the if a: statement in class Z?
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

class Z:
    def __init__(self, z, a=None):
        self.z = z
        if a:  # this part seems like repetition
            self.a = a.a
            self.b = a.b

a = A('hello')
z = Z('world', a)
assert z.a == a.a  # hello
assert z.b == a.b  # hellohello
I'm wondering if Python has some tools for this. I would prefer to avoid looping over instance variables and using setattr.
I'm looking for something like inheriting from class A into class Z, i.e. Z(A) or similar.
Here's a trivial example of class inheritance that may help you to understand:
class A:
    def __init__(self, a):
        self._a = a
        self._b = self._a + self._a

class Z(A):
    def __init__(self, z, a):
        super().__init__(a)
        self._z = z

clazz = Z('Hello', 'world')
print(clazz._z, clazz._a)  # Hello world
Conceptually, the standard techniques for associating an A instance with a Z instance are:
Using composition (and delegation)
"Composition" simply means that the A instance itself is an attribute of the Z instance. We call this a "has-a" relationship: every Z has an A that's associated with it.
In normal cases, we can simply pass the A instance to the Z constructor, and have it assign an attribute in __init__. Thus:
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

    def action(self):  # added for demonstration purposes.
        pass

class Z:
    def __init__(self, z, a=None):
        self.z = z
        self._a = a  # if not None, this will be an `A` instance
Notice that the attribute holding the A instance is specially named to avoid conflicting with the A class's attribute names. This avoids ambiguity (calling it .a would make one wonder whether my_z.a should get the .a attribute from the A instance, or the entire instance), and marks it as an implementation detail (normally, outside code has no good reason to pull the entire A instance out of the Z; the whole point of delegation is that users of Z don't have to worry about A's interface).
One important limitation is that the composition relationship is one-way by nature: self._a = a gives the Z class access to A contents, but not the other way around. (Of course, it's also possible to build the relationship in both directions, but this will require some planning ahead.)
"Delegation" means that we use some scheme in the code, so that looking something up in a Z instance finds it in the composed A instance when necessary. There are multiple ways to achieve this in Python, at least two of which are worth mentioning:
Explicit delegation per attribute
We define a separate property in the Z class, for each attribute we want to delegate. For example:
# within the `Z` class
@property
def a(self):
    return self._a.a

# The setter can also be omitted to make a read-only attribute;
# alternately, additional validation logic can be added to the function.
@a.setter
def a(self, value):
    self._a.a = value
For methods, the same property approach should work, but it may be simpler to define a wrapper function that forwards the call:
def action(self):
    return self._a.action()
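Putting the fragments above into a complete, runnable sketch (the action method and its behavior here are illustrative):

```python
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

    def action(self):
        return self.a.upper()

class Z:
    def __init__(self, z, a=None):
        self.z = z
        self._a = a  # the composed A instance

    @property
    def a(self):
        return self._a.a

    @a.setter
    def a(self, value):
        self._a.a = value

    def action(self):
        # delegate the method call to the composed instance
        return self._a.action()

z = Z('world', A('hello'))
print(z.a)         # hello
print(z.action())  # HELLO
```

Users of Z never touch the composed A directly; reads, writes, and method calls all go through Z's own interface.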
Delegation via __getattr__
The __getattr__ magic ("dunder") method allows us to provide fallback logic for looking up an attribute in a class, if it isn't found by the normal means. We can use this for the Z class, so that it will try looking within its _a if all else fails. This looks like:
def __getattr__(self, name):
    return getattr(self._a, name)
Here, we use the free function getattr to look up the name dynamically within the A instance.
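A runnable sketch of this fallback, reusing the composed classes from before:

```python
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

class Z:
    def __init__(self, z, a=None):
        self.z = z
        self._a = a  # the composed A instance

    def __getattr__(self, name):
        # Only called when normal lookup on the Z instance fails,
        # so attributes like .z and ._a never reach this fallback.
        return getattr(self._a, name)

z = Z('world', A('hello'))
print(z.z)  # found directly on the Z instance: world
print(z.b)  # falls through to the composed A instance: hellohello
```

Note that there is no recursion hazard here: _a is found in the instance dict by normal lookup, so __getattr__ is never invoked for it.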
Using inheritance
This means that each Z instance will, conceptually, be a kind of A instance - classes represent types, and inheriting Z from A means that it will be a subtype of A.
We call this an "is-a" relationship: every Z instance is an A instance. More precisely, a Z instance should be usable anywhere that an A instance could be used, but also Z might contain additional data and/or use different implementations.
This approach looks like:
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

    def action(self):  # added for demonstration purposes.
        return f'{self.z.title()}, {self.a}!'

class Z(A):
    def __init__(self, z, a):
        # Use `a` to do the `A`-specific initialization.
        super().__init__(a)
        # Then do `Z`-specific initialization.
        self.z = z
The super function is magic that finds the A.__init__ function, and calls it as a method on the Z instance that's currently being initialized. (That is: self will be the same object for both __init__ calls.)
This is clearly more convenient than the delegation and composition approach. Our Z instance actually has a and b attributes as well as z, and also actually has a action method. Thus, code like my_z.action() will use the method from the A class, and accessing the a and b attributes of a Z instance works - because the Z instance actually directly contains that data.
Note in this example that the code for action now tries to use self.z. This won't work for an A instance constructed directly, but it does work when we construct a Z and call action on it:
>>> A('world').action()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in action
AttributeError: 'A' object has no attribute 'z'
>>> Z('hello', 'world').action()
'Hello, world!'
We say that such an A class, which doesn't properly function on its own, is abstract. (There are more tools we can use to prevent accidentally creating an unusable base A; these are outside the scope of this answer.)
This convenience comes with serious implications for design. It can be hard to reason about deep inheritance structures (where the A also inherits from B, which inherits from C...) and especially about multiple inheritance (Z can inherit from B as well as A). Doing these things requires careful planning and design, and a more detailed understanding of how super works - beyond the scope of this answer.
Inheritance is also less flexible. For example, when the Z instance composes an A instance, it's easy to swap that A instance out later for another one. Inheritance doesn't offer that option.
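To make that flexibility concrete, here is a small sketch (all class names are illustrative) where the composed part is swapped out at runtime, something a base class can never be:

```python
class Engine:
    def start(self):
        return 'vroom'

class ElectricEngine:
    def start(self):
        return 'hum'

class Car:
    def __init__(self, engine):
        self._engine = engine  # composed part

    def start(self):
        # delegate to whatever engine is currently installed
        return self._engine.start()

car = Car(Engine())
print(car.start())              # vroom
car._engine = ElectricEngine()  # swap the composed instance at runtime
print(car.start())              # hum
```

Had Car inherited from Engine instead, changing to electric behavior would require constructing a new object of a different class.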
Using mixins
Essentially, using a mixin means using inheritance (generally, multiple inheritance), even though we conceptually want a "has-a" relationship, because the convenient usage patterns are more important than the time spent designing it all up front. It's a complex, but powerful design pattern that essentially lets us build a new class from component parts.
Typically, mixins will be abstract (in the sense described in the previous section). Most examples of mixins also won't contain data attributes, but only methods, because they're generally designed specifically to implement some functionality. (In some programming languages, when using multiple inheritance, only one base class is allowed to contain data. However, this restriction is not necessary and would make no sense in Python, because of how objects are implemented.)
One specific technique common with mixins is that the first base class listed will be an actual "base", while everything else is treated as "just" an abstract mixin. To keep things organized while initializing all the mixins based on the original Z constructor arguments, we use keyword arguments for everything that will be passed to the mixins, and let each mixin use what it needs from the **kwargs.
class Root:
    # We use this to swallow up any arguments that were passed "too far"
    def __init__(self, *args, **kwargs):
        pass

class ZBase(Root):
    def __init__(self, z, **kwargs):
        # a common pattern is to just accept arbitrary keyword arguments
        # that are passed to all the mixins, and let each one sort out
        # what it needs.
        super().__init__(**kwargs)
        self.z = z

class AMixin(Root):
    def __init__(self, **kwargs):
        # This `super()` call is in case more mixins are used.
        super().__init__(**kwargs)
        self.a = kwargs['a']
        self.b = self.a + self.a

    def func(self):  # This time, we'll make it do something
        return f'{self.z.title()}, {self.a}!'

# We combine the base with the mixins by deriving from both.
# Normally there is no reason to add any more logic here.
class Z(ZBase, AMixin): pass
We can use this like:
>>> # we use keyword arguments for all the mixins' arguments
>>> my_z = Z('hello', a='world')
>>> # now the `Z` instance has everything defined in both base and mixin:
>>> my_z.func()
'Hello, world!'
>>> my_z.z
'hello'
>>> my_z.a
'world'
>>> my_z.b
'worldworld'
The code in AMixin can't stand on its own:
>>> AMixin(a='world').func()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 8, in func
AttributeError: 'AMixin' object has no attribute 'z'
but when the Z instance has both ZBase and AMixin as bases, and is used to call func, the z attribute can be found - because now self is a Z instance, which has that attribute.
The super logic here is a bit tricky. The details are beyond the scope of this post, but suffice to say that with mixin classes that are set up this way, super will forward to the next, sibling base of Z, as long as there is one. It will do this no matter what order the mixins appear in; the Z instance determines the order, and super calls whatever is "next in line". When all the bases have been consulted, next in line is Root, which is just there to intercept the kwargs (since the last mixin doesn't "know" it's last, and passes them on). This is necessary because otherwise, next in line would be object, and object.__init__ raises an exception if there are any arguments.
For more details, see What is a mixin and why is it useful?.
I have a base class with some methods and some (potentially many) child classes which modify some aspects of the base class. Additionally, there is a common modification that I'd like to be able to apply to any of the child classes. This modification involves overriding a method. I feel like there should be a simple OOP approach to achieving this, but I don't know what it is.
Say I have a parent class (Base) and some child classes (A and B):
class Base:
    def __init__(self):
        self.x = 0

    def go(self):
        self.x += 1

    def printme(self):
        print(f'x: {self.x}')

class A(Base):  # subclasses Base
    def go(self):
        self.x -= 1

class B(Base):  # subclasses Base
    def go(self):
        self.x -= 2

a = A()
b = B()
I want to be able to update the .printme() method of objects like a or b, but this doesn't work:
def printme(self):
    print(f'modified printing of x: {self.x}')

a.printme = printme
a.printme()  # raises TypeError: printme() missing 1 required positional argument: 'self'
(I've seen this solution but it seems hacky and I think there must be a better way.)
I also can't think of a way to use multiple inheritance: if I write a single subclass of Base with the updated printme() function, then I can't generate objects that have the go() methods of class A or B without writing separate subclasses for each of those.
After thinking about it for a while longer, I realized it would be possible to dynamically create a new subclass from any child class and override the method like this:
def with_mod(Class):
    class ClassWithMod(Class):
        def printme(self):
            print(f'new statement: {self.x}')
    return ClassWithMod

a = A()
a.printme()  # prints "x: 0"

a_mod = with_mod(A)()
a_mod.printme()  # prints "new statement: 0"
I tested this code and it works, but is this the "correct" approach or is there a more correct/pythonic OOP approach? What I don't like about this solution is that type(a_mod) is the generic ClassWithMod. I'd like it to still be of type A. (Maybe this means I want to override the method at the instance level rather than the class level?)
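(For reference, overriding at the instance level is possible with the standard library's types.MethodType, which binds a plain function to a single instance; a sketch using the classes above:)

```python
import types

class Base:
    def __init__(self):
        self.x = 0

    def go(self):
        self.x += 1

    def printme(self):
        print(f'x: {self.x}')

class A(Base):
    def go(self):
        self.x -= 1

def printme(self):
    print(f'modified printing of x: {self.x}')

a = A()
a.printme = types.MethodType(printme, a)  # bind to this one instance only
a.printme()          # modified printing of x: 0
print(type(a) is A)  # True - the instance's class is unchanged

b = A()
b.printme()          # x: 0 - other instances keep the class's method
```

This keeps type(a) as A, and class A itself is untouched.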
There are two main ways for a derived class to call a base class's methods.
Base.method(self):

class Derived(Base):
    def method(self):
        Base.method(self)
        ...

or super().method():

class Derived(Base):
    def method(self):
        super().method()
        ...
Suppose I now do this:
obj = Derived()
obj.method()
As far as I know, both Base.method(self) and super().method() do the same thing here. Both will call Base.method with a reference to obj. In particular, super() doesn't do the legwork to instantiate an object of type Base. Instead, it creates a lightweight proxy object of type super that remembers obj and the class the call appears in, and dynamically looks up the right attribute from the next class in the method resolution order when you access it on the super object.
The super() method has the advantage of minimizing the work you need to do when you change the base for a derived class. On the other hand, Base.method uses less magic and may be simpler and clearer when a class inherits from multiple base classes.
Most of the discussions I've seen recommend calling super(), but is this an established standard among Python coders? Or are both of these methods widely used in practice? For example, answers to this stackoverflow question go both ways, but generally use the super() method. On the other hand, the Python textbook I am teaching from this semester only shows the Base.method approach.
Using super() implies the idea that whatever follows should be delegated to the base class, no matter what it is. It's about the semantics of the statement. Referring explicitly to Base on the other hand conveys the idea that Base was chosen explicitly for some reason (perhaps unknown to the reader), which might have its applications too.
Apart from that however there is a very practical reason for using super(), namely cooperative multiple inheritance. Suppose you've designed the following class hierarchy:
class Base:
    def test(self):
        print('Base.test')

class Foo(Base):
    def test(self):
        print('Foo.test')
        Base.test(self)

class Bar(Base):
    def test(self):
        print('Bar.test')
        Base.test(self)
Now you can use both Foo and Bar and everything works as expected. However these two classes won't work together in a multiple inheritance schema:
class Test(Foo, Bar):
    pass

Test().test()
# Output:
# Foo.test
# Base.test
That last call to test skips over Bar's implementation since Foo didn't specify that it wants to delegate to the next class in method resolution order but instead explicitly specified Base. Using super() resolves this issue:
class Base:
    def test(self):
        print('Base.test')

class Foo(Base):
    def test(self):
        print('Foo.test')
        super().test()

class Bar(Base):
    def test(self):
        print('Bar.test')
        super().test()

class Test(Foo, Bar):
    pass

Test().test()
# Output:
# Foo.test
# Bar.test
# Base.test
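The order that super() follows is the class's method resolution order, which can be inspected directly; a self-contained sketch repeating the classes above:

```python
class Base:
    def test(self):
        print('Base.test')

class Foo(Base):
    def test(self):
        print('Foo.test')
        super().test()

class Bar(Base):
    def test(self):
        print('Bar.test')
        super().test()

class Test(Foo, Bar):
    pass

# Each super().test() call moves one step along this chain:
print([c.__name__ for c in Test.__mro__])
# ['Test', 'Foo', 'Bar', 'Base', 'object']
```

Note that Foo's super().test() reaches Bar, a class Foo knows nothing about; the MRO is determined by Test, the class of the instance, not by Foo's own bases.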
I was reading some Python code in a private repository on GitHub and found a class resembling the one below:
class Point(object):
    '''Models a point in 2D space.'''

    def __init__(self, x, y):
        super(Point, self).__init__()
        self.x = x
        self.y = y

    # some methods

    def __repr__(self):
        return 'Point({}, {})'.format(self.x, self.y)
I do understand the importance and advantages of using super when initializing classes. Personally, I find the first statement in __init__ to be redundant, as all Python classes inherit from object. So I want to know: what are the advantages (if any) of initializing a Point object using super when inheriting directly from the base object class in Python?
There is none in this particular case, as object.__init__() is an empty method. However, if you were to inherit from something else, or if you were to use multiple inheritance, the call to super().__init__() would ensure that your classes get properly initialized (assuming, of course, they depend on initialization from their parent classes). Without it, Python's MRO cannot do its magic in cooperative multiple inheritance, for example.
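A small sketch of why the habit pays off under multiple inheritance (class names here are illustrative): if either __init__ below skipped its super() call, the other class's initialization would silently never run.

```python
class Named:
    def __init__(self, name, **kwargs):
        super().__init__(**kwargs)  # keep the cooperative chain going
        self.name = name

class Aged:
    def __init__(self, age, **kwargs):
        super().__init__(**kwargs)  # eventually reaches object.__init__()
        self.age = age

class Person(Named, Aged):
    pass

p = Person(name='Ada', age=36)
print(p.name, p.age)  # Ada 36
```

Here the MRO is Person -> Named -> Aged -> object, and each super().__init__(**kwargs) hands the remaining keyword arguments to the next class in line.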
It is pretty easy to implement __len__(self) method in Python so that it handles len(inst) calls like this one:
class A(object):
    def __len__(self):
        return 7

a = A()
len(a)  # gives us 7
And there are plenty of alike methods you can define (__eq__, __str__, __repr__ etc.).
I know that Python classes are objects as well.
My question: can I somehow define, for example, __len__ so that the following works:
len(A) # makes sense and gives some predictable result
What you're looking for is called a "metaclass"... just like a is an instance of class A, A is an instance of a class as well, referred to as a metaclass. By default, Python classes are instances of the type class (the only exception is under Python 2, which has some legacy "old style" classes: those that don't inherit from object). You can check this by doing type(A)... it should return type itself (yes, that object has been overloaded a little bit).
Metaclasses are powerful and brain-twisting enough to deserve more than the quick explanation I was about to write... a good starting point would be this stackoverflow question: What is a Metaclass.
For your particular question, for Python 3, the following creates a metaclass which aliases len(A) to invoke a class method on A:
class LengthMetaclass(type):
    def __len__(self):
        return self.clslength()

class A(object, metaclass=LengthMetaclass):
    @classmethod
    def clslength(cls):
        return 7

print(len(A))
(Note: Example above is for Python 3. The syntax is slightly different for Python 2: you would use class A(object):\n __metaclass__=LengthMetaclass instead of passing it as a parameter.)
The reason LengthMetaclass.__len__ doesn't affect instances of A is that attribute resolution in Python first checks the instance dict, then walks the class hierarchy [A, object], but it never consults the metaclasses. Whereas accessing A.__len__ first consults the instance A, then walks its class hierarchy, which consists of [LengthMetaclass, type].
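A quick demonstration of that lookup rule, with a simplified version of the metaclass above:

```python
class LengthMetaclass(type):
    def __len__(cls):
        return 7

class A(metaclass=LengthMetaclass):
    pass

print(len(A))        # 7 - found on the metaclass, since A is its instance
try:
    len(A())         # instances never consult the metaclass
except TypeError as e:
    print('TypeError')
```

For len(A()) to work as well, A itself would also need to define __len__.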
Since a class is an instance of a metaclass, one way is to use a custom metaclass:
>>> Meta = type('Meta', (type,), {'__repr__': lambda cls: 'class A'})
>>> A = Meta('A', (object,), {'__repr__': lambda self: 'instance of class A'})
>>> A
class A
>>> A()
instance of class A
I fail to see how the syntax specifically is important, but if you really want a simple way to implement it, just use the normal __len__(self) that handles len(inst), and make it return a class variable that all instances share:
class A:
    my_length = 5

    def __len__(self):
        return self.my_length
and you can later call it like that:
len(A()) #returns 5
Obviously this creates a temporary instance of your class, but length only really makes sense for an instance of a class, not for the concept of a class (a type object).
Writing a custom metaclass sounds like a very bad idea, and unless you are doing something for school or just to mess around, I really suggest you rethink this idea.
try this:
class Lengthy:
    x = 5

    @classmethod
    def __len__(cls):
        return cls.x
The @classmethod allows you to call it directly on the class, but your len implementation won't be able to depend on any instance variables:
a = Lengthy()
len(a)