I am given a designated factory of A-type objects. I would like to make a new version of A-type objects that also have the methods in a Mixin class. For reasons that are too long to explain here, I can't use class A(Mixin), I have to use the A_factory. Below I try to give a bare bones example.
I thought naively that it would be sufficient to inherit from Mixin to endow A-type objects with the mixin methods, but the attempts below don't work:
class A: pass

class A_factory:
    def __new__(self):
        return A()

class Mixin:
    def method(self):
        print('aha!')

class A_v2(Mixin):  # attempt 1
    def __new__(cls):
        return A_factory()

class A_v3(Mixin):  # attempt 2
    def __new__(cls):
        self = A_factory()
        super().__init__(self)
        return self
In fact both A_v2().method() and A_v3().method() raise AttributeError: 'A' object has no attribute 'method'.
What is the correct way of using A_factory within class A_vn(Mixin) so that A-type objects created by the factory inherit the mixin methods?
There's no obvious reason why you should need __new__ for what you're showing here. There's a nice discussion of the subject here: Why is __init__() always called after __new__()?
If you try the below it should work:
class Mixin:
    def method(self):
        print('aha!')

class A(Mixin):
    def __init__(self):
        super().__init__()

test = A()
test.method()
If you need to use a factory method, it should be a function rather than a class. There's a very good discussion of how to use factory methods here: https://realpython.com/factory-method-python/
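The attempts in the question fail because the object the factory returns is a plain A, whose class doesn't have Mixin anywhere in its MRO. For completeness, here is one sketch of my own (not from the answers above) that keeps the given factory and still picks up the mixin methods; it assumes instances of A allow __class__ reassignment, which holds for ordinary classes like this one:

class A: pass

class A_factory:
    def __new__(cls):
        return A()

class Mixin:
    def method(self):
        print('aha!')

class A_v4(Mixin, A):
    def __new__(cls):
        obj = A_factory()    # plain A instance from the given factory
        obj.__class__ = cls  # retype it so the Mixin methods resolve
        return obj

A_v4().method()  # prints: aha!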
I've got some code where I need to refer to a superclass when defining stuff in a derived class:
class Base:
    def foo(self):
        print('foo')

    def bar(self):
        print('bar')

class Derived_A(Base):
    meth = Base.foo

class Derived_B(Base):
    meth = Base.bar

Derived_A().meth()
Derived_B().meth()
This works, but I don't like the verbatim references to Base in the derived classes. Is there a way to use super or something similar for this?
You can't do that.
The class keyword in Python is used to create classes, which are instances of type. In its simplified form, it does the following:
Python creates a namespace and executes the body of the class in that namespace, so that it gets populated with all the methods and attributes and so on...
Then it calls the three-argument form of type(). The result of this call is your class, which is then assigned to a symbol which is the name of your class.
The point is that while the body of the class is being executed, the bases are not known yet. They are only passed to type() afterwards. That is also the reason why you can't use super() here.
Does this work for you?
class Base:
    def foo(self):
        print('foo')

    def bar(self):
        print('bar')

class Derived_A(Base):
    def __init__(self):
        self.meth = super().foo

class Derived_B(Base):
    def __init__(self):
        self.meth = super().bar

Derived_A().meth()  # foo
Derived_B().meth()  # bar
You'll need to look up the method on the base class after the new type is created. In the body of the class definition, the type and its base classes are not accessible.
Something like:
class Derived_A(Base):
    def meth(self):
        return super().foo()
Now, it is possible to do some magic behind the scenes to expose Base to the scope of the class definition as it's being executed, but that's much dirtier, and would mean that you'd need to supply a metaclass in your class definition.
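For illustration only, a hypothetical sketch of my own of that dirtier metaclass route: __prepare__ seeds the class-body namespace so a helper name (here 'base') is visible while the body runs, and __new__ removes it again afterwards.

class ExposeBase(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # Seed the class-body namespace so 'base' is visible in the body.
        ns = {}
        if bases:
            ns['base'] = bases[0]
        return ns

    def __new__(mcls, name, bases, ns, **kwargs):
        ns.pop('base', None)  # don't leave the helper name on the class
        return super().__new__(mcls, name, bases, ns, **kwargs)

class Base:
    def foo(self):
        print('foo')

class Derived_A(Base, metaclass=ExposeBase):
    meth = base.foo  # 'base' resolves thanks to __prepare__

Derived_A().meth()  # foo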
Since you want "magic", there is still one sane option we can take before diving into metaclasses. It requires Python 3.9+ (note that chaining classmethod and property this way was deprecated in Python 3.11 and removed in 3.13, so treat it as a trick for a narrow version window):
def alias(name):
    def inner(cls):
        return getattr(cls, name).__get__(cls)
    return classmethod(property(inner))

class Base:
    def foo(self):
        ...

class Derived_A(Base):
    meth = alias("foo")

Derived_A().meth()  # works
Derived_A.meth()    # also works
Yes, this does require passing the method name as a string, which destroys your IDE and typechecker's ability to reason about it. But there isn't a good way to get what you are wanting without some compromises like that.
Really, a bit of redundancy for readability is probably worth it here.
There are two main ways for a derived class to call a base class's methods.
Base.method(self):
class Derived(Base):
    def method(self):
        Base.method(self)
        ...
or super().method():
class Derived(Base):
    def method(self):
        super().method()
        ...
Suppose I now do this:
obj = Derived()
obj.method()
As far as I know, both Base.method(self) and super().method() do the same thing here. Both will call Base.method with a reference to obj. In particular, super() doesn't do the legwork to instantiate an object of type Base. Instead, it creates a new object of type super that wraps obj, and dynamically looks up the right attribute from the next class in the method resolution order (here Base) when you access it on the super object.
The super() method has the advantage of minimizing the work you need to do when you change the base for a derived class. On the other hand, Base.method uses less magic and may be simpler and clearer when a class inherits from multiple base classes.
Most of the discussions I've seen recommend calling super(), but is this an established standard among Python coders? Or are both of these methods widely used in practice? For example, answers to this stackoverflow question go both ways, but generally use the super() method. On the other hand, the Python textbook I am teaching from this semester only shows the Base.method approach.
Using super() implies the idea that whatever follows should be delegated to the base class, no matter what it is. It's about the semantics of the statement. Referring explicitly to Base on the other hand conveys the idea that Base was chosen explicitly for some reason (perhaps unknown to the reader), which might have its applications too.
Apart from that however there is a very practical reason for using super(), namely cooperative multiple inheritance. Suppose you've designed the following class hierarchy:
class Base:
    def test(self):
        print('Base.test')

class Foo(Base):
    def test(self):
        print('Foo.test')
        Base.test(self)

class Bar(Base):
    def test(self):
        print('Bar.test')
        Base.test(self)
Now you can use both Foo and Bar and everything works as expected. However, these two classes won't work together in a multiple-inheritance scheme:
class Test(Foo, Bar):
    pass

Test().test()
# Output:
# Foo.test
# Base.test
That last call to test skips over Bar's implementation, since Foo didn't delegate to the next class in the method resolution order but instead explicitly named Base. Using super() resolves this issue:
class Base:
    def test(self):
        print('Base.test')

class Foo(Base):
    def test(self):
        print('Foo.test')
        super().test()

class Bar(Base):
    def test(self):
        print('Bar.test')
        super().test()

class Test(Foo, Bar):
    pass

Test().test()
# Output:
# Foo.test
# Bar.test
# Base.test
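If it isn't obvious why Bar sits between Foo and Base in that chain, inspecting the method resolution order makes it explicit:

print(Test.__mro__)
# Test -> Foo -> Bar -> Base -> object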
I would like to use an abstract base class to force implementation of a class attribute in a concrete class. I know how to force implementation of a generic attribute using @abc.abstractproperty. There are lots of SO answers about how to do that - I've read about 10 of them :) But I would like to ensure that the concrete class must define the abstract attribute as a class attribute and NOT as an instance attribute. Anyone know how to do this?
EDITED to address question:
I have users who will define concrete classes from the ABC. Certain abstract properties need to be "concretized" as class attributes. The checking needs to happen the first time they instantiate the concrete class - not sooner. Ideally, if they mistakenly define an abstract property as an instance attribute, a TypeError will be raised that flags their mistake.
The point is that the value of the class attribute should be the same for all instances of the concrete class.
I think I am missing some knowledge about Python internals that would help me address this question properly...
import abc

class MyABC(metaclass=abc.ABCMeta):
    @abc.abstractproperty
    def foo(self):
        return 'we never run this line'

# I want to enforce this kind of subclassing
class GoodConcrete(MyABC):
    @classmethod
    def foo(cls):
        return 1  # value is the same for all class instances

# I want to forbid this kind of subclassing
class BadConcrete(MyABC):
    def foo(self, val):
        self.foo = val
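The excerpt ends without an answer, so here is one possible sketch of my own (it drops abstractproperty entirely): a check in __new__ walks the MRO and requires 'foo' to be defined at class level somewhere below MyABC, so the TypeError fires at first instantiation as requested, not sooner.

import abc

class MyABC(abc.ABC):
    def __new__(cls, *args, **kwargs):
        # Require 'foo' as a class attribute somewhere below MyABC in the MRO.
        for klass in cls.__mro__:
            if klass is MyABC:
                raise TypeError(
                    "%s must define 'foo' as a class attribute" % cls.__name__)
            if 'foo' in vars(klass):
                break
        return super().__new__(cls)

class GoodConcrete(MyABC):
    foo = 1  # class attribute: shared by all instances

class BadConcrete(MyABC):
    def __init__(self):
        self.foo = 1  # instance attribute only: rejected

GoodConcrete()    # fine
# BadConcrete()   # raises TypeError at first instantiation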
First day learning Python, please excuse the basic question.
Assuming I have been given an object which contains an unimplemented method that I need to implement, e.g.:
class myclass():
    def __init__(self):
        self.unimplementedmethod = False
What is the correct way to implement this in an instantiated object? I do not want to alter the base class in any way.
I have experimented and found the following code seems to work, but is it correct/good style?
def methodimplementation():
    print("method called")

myobject = myclass()
myobject.unimplementedmethod = methodimplementation
Is this the right path? Or should I be doing something different like perhaps creating a derived class first, implementing the methods in it, and then instantiating an object based on the derived class? What is best practice?
You need to subclass the base class:

class myclass():
    def some_method(self):
        raise NotImplementedError

class my_subclass(myclass):
    def some_method(self):
        print("method called")
You want to create an abstract base class. For that, you set abc.ABCMeta as the metaclass of your base class (you don't inherit from ABCMeta itself). Then, to define a method as abstract, you decorate it with @abstractmethod. For example:
from abc import ABCMeta, abstractmethod

class BaseClass(metaclass=ABCMeta):
    @abstractmethod
    def my_method(self):
        pass
Then you may create the child class as:
class MyChildClass(BaseClass):
    def my_method(self):
        print('my method')
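Instantiating the abstract base then fails, while the child works:

# BaseClass() raises TypeError: can't instantiate abstract class
child = MyChildClass()
child.my_method()  # prints 'my method'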
The good way is using subclasses, but if you can't do that, here is a way to access self from a plain function not defined in a class:
class Bar:
    def __init__(self):
        pass

    def foo(self):
        try:
            self._foo(self)
        except AttributeError:
            raise NotImplementedError

    def set_foo(self, function):
        setattr(self, '_foo', function)

    def another_method(self):
        print("Another method from {}".format(self))

def foo(self):
    self.another_method()

bar = Bar()
bar.set_foo(foo)
bar.foo()
So, def foo(self) defines a function with a single argument, self, like a method. This function calls the instance method another_method.
Bar.set_foo creates a new attribute _foo on the instance of Bar.
Finally, Bar.foo tries to call self._foo, passing self as the argument. If _foo does not exist, Bar.foo raises a NotImplementedError as expected.
This way you can access self from foo without subclassing.
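A variant of the same idea (my own suggestion, not part of the original answer) binds the function to the instance with types.MethodType, so it behaves like a real method and no set_foo machinery is needed:

import types

def foo(self):
    self.another_method()

bar = Bar()
bar.foo = types.MethodType(foo, bar)  # bind foo to this instance
bar.foo()                             # self is passed automatically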
I am developing a system which has a series of single multilevel inheritance hierarchies. One of the methods (applicable to all the classes) has to perform the same thing for most of the classes: pass a list up to its parent class.
I know that if one doesn't define a method in one of the inherited classes, its parent's method is used. But when we use super, we need to mention the name of the class it is being called from.
One way I know to achieve this is to redefine the method in every class with the class name as an argument. Is there any elegant method where I can define it once at the topmost parent, and then override it only when necessary?
The implementation right now looks like this:
class a(object):
    def __init__(self):
        self.myL = list()
        print('hello')

class b(a):
    def __init__(self):
        super(b, self).__init__()

    def resolve(self, passVal):
        print(passVal)
        self.myL.append(passVal)
        super(b, self).resolve(passVal + 1)

class c(b):
    def __init__(self):
        super(c, self).__init__()

    def resolve(self, passVal):
        print(passVal)
        self.myL.append(passVal)
        super(c, self).resolve(passVal + 1)
Instead, if I could define resolve once in class a, all the other classes would inherit it. I understand a itself will never be able to delegate further up, but redefining the method in every class seems like a lot of unnecessary extra work.
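If Python 3 is an option, zero-argument super() at least removes the class name from each override, which addresses the "mention the name of the class" complaint; a sketch of my own, reusing the a/b/c names from the question, with a hasattr guard so the chain stops cleanly once it reaches a:

class a:
    def __init__(self):
        self.myL = list()

class b(a):
    def resolve(self, passVal):
        print(passVal)
        self.myL.append(passVal)
        if hasattr(super(), 'resolve'):  # stop when no parent implements it
            super().resolve(passVal + 1)

class c(b):
    def resolve(self, passVal):
        print(passVal)
        self.myL.append(passVal)
        super().resolve(passVal + 1)

c().resolve(1)  # prints 1 then 2; the chain ends at a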