When inheriting from a class, the child class is accessible on the parent via the .__subclasses__() method.
class BaseClass:
    pass

class SubClass(BaseClass):
    pass

BaseClass.__subclasses__()
# [<class '__main__.SubClass'>]
However, deleting the child class doesn't seem to remove it from the parent.
del SubClass
BaseClass.__subclasses__()
# [<class '__main__.SubClass'>]
Where does __subclasses__ get its information from? And can I manipulate it?
Or
Is there a proper way to remove a class and have its parent lose the reference to it (e.g. BaseClass.remove_subclass(SubClass))?
The subclass contains references to itself internally, so it continues to exist until it is garbage collected; del only removes your name for it. If you force a garbage collection pass it will disappear from __subclasses__():
import gc
gc.collect()
and then it has gone.
However, make sure you have deleted all other references to the class before you force the garbage collection. For example, if you are working interactively and the last output was the subclass list, there will still be a reference to the class in _.
class BaseClass:
    pass

class SubClass(BaseClass):
    pass
print(BaseClass.__subclasses__())
# [<class '__main__.SubClass'>]
del SubClass
import gc
gc.collect()
print(BaseClass.__subclasses__())
# []
Output with Python 3.7 is:
[<class '__main__.SubClass'>]
[]
I should probably also add that while garbage collection works for this simple case, you probably shouldn't depend on it in real life: it would be far too easy to accidentally keep a reference to the subclass somewhere in your code and then wonder why the class never goes away.
What you are trying to do here is keep a registry of subclasses so that the factory can return an object of the appropriate class. If you want to be able to add and remove classes from the registry then I think you have to be explicit. You could still use __subclasses__ to find candidate classes, but keep a flag on each class to show whether it is enabled. Then, instead of just deleting the subclass, set the flag to show the class is no longer in use and then (if you want) delete it.
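For example, a minimal sketch of such a registry (the enabled flag and the enabled_subclasses helper are illustrative names, not an existing API):

class BaseClass:
    enabled = True

    @classmethod
    def enabled_subclasses(cls):
        # candidate classes still come from __subclasses__; the flag decides membership
        return [sub for sub in cls.__subclasses__() if sub.enabled]

class SubClass(BaseClass):
    pass

print(BaseClass.enabled_subclasses())  # [<class '__main__.SubClass'>]
SubClass.enabled = False               # "remove" it explicitly, without relying on gc
print(BaseClass.enabled_subclasses())  # []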
Where does __subclasses__ get its information from?
For the CPython implementation of Python, the type object keeps a list of weak references under PyTypeObject.tp_subclasses. This is marked as "Not inherited. Internal use only" in the docs, so it can be treated as an implementation detail of CPython. See also: How is __subclasses__ method implemented in CPython?.
And can I manipulate it?
Any class has a .__bases__ descriptor which, if changed, updates the references in PyTypeObject.tp_subclasses.
.__bases__ can only be manipulated when the class doesn't directly inherit from object. So while:
class BaseClass: pass
class OtherClass(BaseClass): pass
and
class BaseClass: pass
class OtherClass: pass
BaseClass.__bases__ = (OtherClass, )
# TypeError: __bases__ assignment: 'BaseClass' deallocator differs from 'object'
should be equivalent*. You will instead get an error. See: https://bugs.python.org/issue672115
You also can't use this to change a class to inherit from object.
class BaseClass: pass
class SubClass(BaseClass): pass
SubClass.__bases__ = (object,)
# TypeError: __bases__ assignment: 'type' object layout differs from 'BaseClass'
You can, however, change the bases of a class to be another class.
class BaseClass: pass
class SubClass(BaseClass): pass
class OtherClass: pass
SubClass.__bases__ = (OtherClass, )
# Or don't define it.
SubClass.__bases__ = (type("OtherClass", (object, ), {}), )
This all updates the parent class:
>>> BaseClass.__subclasses__()
[]
Related
I have a class with a private constant _BAR = object().
In a child class, outside of a method (no access to self), I want to refer to _BAR.
Here is a contrived example:
class Foo:
    _BAR = object()

    def __init__(self, bar: object = _BAR):
        ...

class DFoo(Foo):
    """Child class where I want to access private class variable from parent."""

    def __init__(self, baz: object = super()._BAR):
        super().__init__(baz)
Unfortunately, this doesn't work. One gets an error: RuntimeError: super(): no arguments
Is there a way to use super outside of a method to get a parent class attribute?
The workaround is to use Foo._BAR, I am wondering though if one can use super to solve this problem.
Inside of DFoo, you cannot refer to _BAR without referring to Foo. Python names are searched in the local, enclosing, global and built-in scopes (in this order, the so-called LEGB rule), and _BAR is not present in any of them.
Let's ignore an explicit Foo._BAR.
Further, it gets inherited: DFoo._BAR will be looked up first in DFoo, and when not found, in Foo.
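A small illustration of that lookup order:

class Foo:
    _BAR = object()

class DFoo(Foo):
    pass

# _BAR is not found on DFoo itself, so the lookup falls back to Foo
print(DFoo._BAR is Foo._BAR)  # True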
What other means are there to get the Foo reference? Foo is a base class of DFoo. Can we use this relationship? Yes and no. Yes at execution time and no at definition time.
The problem is that when DFoo is being defined, it does not exist yet. We have no starting point from which to follow the inheritance chain. This rules out an indirect reference (DFoo -> Foo) in a def method(self, ....): line and in a class attribute _DBAR = _BAR.
It is possible to work around this limitation using a class decorator. Define the class and then modify it:
def deco(cls):
    cls._BAR = cls.__mro__[1]._BAR * 2  # __mro__[0] is the class itself
    return cls

class Foo:
    _BAR = 10

@deco
class DFoo(Foo):
    pass

print(Foo._BAR, DFoo._BAR)  # 10 20
A similar effect can be achieved with a metaclass.
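For completeness, here is a minimal sketch of the metaclass variant (the DoublingMeta name and the single-inheritance assumption are mine):

class DoublingMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:  # only derived classes double the inherited value
            cls._BAR = bases[0]._BAR * 2
        return cls

class Foo(metaclass=DoublingMeta):
    _BAR = 10

class DFoo(Foo):
    pass

print(Foo._BAR, DFoo._BAR)  # 10 20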
The last option for getting a reference to Foo is at execution time. We have the object self, its type is DFoo, its parent type is Foo, and that is where _BAR exists. The well-known super() is a shortcut to get the parent.
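A minimal sketch of that execution-time access, using the classes from the question:

class Foo:
    _BAR = object()

class DFoo(Foo):
    def get_bar(self):
        # inside a method super() works, because there is an instance to bind to
        return super()._BAR

print(DFoo().get_bar() is Foo._BAR)  # True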
I have assumed only one base class for simplicity. If there were several base classes, super() returns only one of them. The example class decorator does the same. To understand how several bases are sorted to a sequence, see how the MRO works (Method Resolution Order).
My final thought is that I could not think up a use-case where such access as in the question would be required.
Short answer: you can't!
I'm not going into much detail about the super class itself here. (I've written a pure Python implementation in this gist if you'd like to read it.)
But now let's see how we can call super:
1- Without arguments:
From PEP 3135:
This PEP proposes syntactic sugar for use of the super type to
automatically construct instances of the super type binding to the
class that a method was defined in, and the instance (or class object
for classmethods) that the method is currently acting upon.
The new syntax:
super()
is equivalent to:
super(__class__, <firstarg>)
...and <firstarg> is the first parameter of the method
So this is not an option because you don't have access to the "instance".
(The body of a function/method is not executed unless it gets called, so there is no problem if DFoo doesn't exist yet inside the method definition.)
2- super(type, instance)
From documentation:
The zero argument form only works inside a class definition, as the
compiler fills in the necessary details to correctly retrieve the
class being defined, as well as accessing the current instance for
ordinary methods.
What were those necessary details mentioned above? A "type" and an "instance":
We can pass neither the "instance" nor the "type" (which is DFoo here). The first, because we are not inside a method, so we have no access to the instance (self). The second, because DFoo itself does not exist yet while the body of the DFoo class is being executed. The body of the class is executed inside a namespace, which is a dictionary; after that, a new instance of type named DFoo is created using that populated dictionary and added to the global namespace. That is roughly what the class keyword does in its simple form.
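A rough sketch of that description (simplified; the real class machinery does more, e.g. __prepare__ and metaclass selection):

# the class body populates a plain dict ...
namespace = {'x': 1, 'method': lambda self: self.x}
# ... and then a new instance of type, named DFoo, is created from it
DFoo = type('DFoo', (object,), namespace)
print(DFoo().method())  # 1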
3- super(type, type):
If the second argument is a type, issubclass(type2, type) must be
true
Same reason as mentioned above about accessing DFoo.
4- super(type):
If the second argument is omitted, the super object returned is
unbound.
If you have an unbound super object you can't do a lookup (except for the super object's own attributes). Remember, a super() object is a descriptor. You can turn an unbound object into a bound object by calling __get__ and passing the instance:
class A:
    a = 1

class B(A):
    pass

class C(B):
    sup = super(B)

    try:
        sup.a
    except AttributeError as e:
        print(e)  # 'super' object has no attribute 'a'

obj = C()
print(obj.sup.a)  # 1
obj.sup automatically calls the __get__.
And again, the same reason about accessing the DFoo type mentioned above applies; nothing has changed. This is just added for the record. These are the ways we can call super.
I have a Python parent class with dozens of methods. These parent methods return a parent object.
Each of these methods is similar to a math operation on two objects (e.g. add(self, other), multiply(self, other)) and returns the result of the operation as a new object of the same class.
I also have a child class, and its objects use all the parent methods. However, I need them to return the result as a new object of the child class not the parent class.
The child class has additional member variables and it has additional methods.
My first thought is to override each parent method with a child method that a) calls the eponymous parent method (the child's add calls super's add), b) converts the returned parent object into a new child object to set the additional child member variable, and c) returns the new child object.
Apart from the additional property that the child has over the parent, the conversion also allows me to perform type assertions to ensure I have submitted a child object as a function parameter, where required.
Maybe this is all par for the course. But it seems tedious and cluttered, as I will have to write many such small overriding methods that all do the same thing (call the parent's method verbatim, convert the result).
What I also do not like about this approach is that if the parent is from a library used elsewhere, I'd have to write the overrides for each parent method. To future-proof, I'd even have to do this for methods I presently don't intend to use.
What are my alternatives? Or is there a better way to set up the classes in the first place, to avoid this?
It has crossed my mind to switch parent and child, but then this new child (formerly parent) will carry around a member variable that means nothing to it, and will have access to methods that make no sense to it.
I assume you have something like
class Parent:
    def __add__(self, other):
        return Parent(...)
when you probably want
class Parent:
    def __add__(self, other):
        return type(self)(...)
This allows the method to return a value whose type depends on its arguments (specifically, its first argument) rather than which class defined it.
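For instance, a quick sketch of the difference (the value attribute is purely illustrative):

class Parent:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # type(self) is Child when called on a Child instance
        return type(self)(self.value + other.value)

class Child(Parent):
    pass

result = Child(1) + Child(2)
print(type(result).__name__, result.value)  # Child 3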
Define the parent class' methods to take the class of self into consideration:
>>> class Parent:
...     def __init__(self): pass
...     def method(self): return self.__class__()  # or type(self)()
...
>>> class Child(Parent): pass
...
>>> Child().method()
<__main__.Child object at 0x00000151EB7E0AF0>
>>> Parent().method()
<__main__.Parent object at 0x00000151EB977640>
I define a Python class in the Python interpreter:
class A:
    pass
I get the base class of A using A.__bases__; it shows
(object,)
but when I enter dir(A), the output doesn't contain a __bases__ attribute. Then I try dir(object), and __bases__ is not found there either. So where does __bases__ come from?
The __bases__ attribute in a class is implemented by a descriptor in the metaclass, type. You have to be a little careful though, since type, as one of the building blocks of the Python object model, is an instance of itself, and so type.__bases__ doesn't do what you would want for introspection.
Try this:
descriptor = type.__dict__['__bases__']
print(descriptor, type(descriptor))
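That prints something like <attribute '__bases__' of 'type' objects> <class 'getset_descriptor'>. Continuing the sketch, the descriptor can be invoked by hand for an ordinary class:

class A:
    pass

descriptor = type.__dict__['__bases__']
print(descriptor.__get__(A, type))  # (object,) -- the same thing A.__bases__ returns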
You can reproduce the same kind of thing with your own descriptors:
class MyMeta(type):
    @property  # a property is a descriptor
    def foo(cls):
        return "foo"

class MyClass(metaclass=MyMeta):
    pass
Now if you access MyClass.foo you'll get the string foo. But you won't see foo in the variables defined in MyClass (if you check with vars or dir). Nor can you access it through an instance of MyClass (my_obj = MyClass(); my_obj.foo raises an AttributeError).
It is a special attribute, akin to __name__ or __dict__, while the result of the dir function actually depends on the implementation of the __dir__ method.
You might want to look it up in the docs here: https://docs.python.org/3/reference/datamodel.html
I am trying to inject a mixin into a class with a decorator. When the code runs, the class no longer has a __dict__ property even though dir(instance) says it has one. I'm not sure where the property is disappearing. Is there a way that I can get __dict__ or otherwise find the instance's attributes?
def testDecorator(cls):
    return type(cls.__name__, (Mixin,) + cls.__bases__, dict(cls.__dict__))

class Mixin:
    pass

@testDecorator
class dummyClass:
    def __init__(self):
        self.testVar1 = 'test'
        self.testVar2 = 3.14

inst = dummyClass()
print(dir(inst))
print(inst.__dict__)
This code works if the decorator is commented out, yet causes an error when the decorator is present. Running on Python 3.5.1.
It's not "losing __dict__". What's happening is that your original dummyClass has a __dict__ descriptor intended to retrieve the __dict__ attribute of instances of your original dummyClass, but your decorator puts that descriptor into a new dummyClass that doesn't descend from the original.
It's not safe to use the original __dict__ descriptor with instances of the new class, because there's no inheritance relationship, and instances of the new class could have their __dict__ pointer at a different offset in their memory layout. To fix this, have your decorator create a class that descends from the original instead of copying its __dict__ and bases:
def testDecorator(cls):
    return type(cls.__name__, (Mixin, cls), {})
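A usage sketch with the fixed decorator (same Mixin/dummyClass setup as in the question):

def testDecorator(cls):
    return type(cls.__name__, (Mixin, cls), {})

class Mixin:
    pass

@testDecorator
class dummyClass:
    def __init__(self):
        self.testVar1 = 'test'
        self.testVar2 = 3.14

inst = dummyClass()
print(inst.__dict__)            # {'testVar1': 'test', 'testVar2': 3.14}
print(isinstance(inst, Mixin))  # True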
This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of classes from which it inherits. Ok, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old-style classes in regards to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question is: is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a read-only attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
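For illustration, a small sketch on new-style classes showing the assignment being rejected (the exact exception and message may differ between Python versions):

class Friendly(object):
    def hello(self):
        print('Hello')

class Person(object):
    pass

try:
    setattr(Person, '__mro__', (Person, Friendly, object))
except (AttributeError, TypeError) as exc:
    # __mro__ has no setter, so the assignment is rejected
    print(exc)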
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person

    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()
    # works! But:
    oldperson.hello()
    # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server, where different applications needed slightly different variations of some of the code).
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass
class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))
## ordinary inheritance
class B(A): pass
## dynamically specified inheritance
class C(T): pass
A() # -> Creating instance of A
B() # -> Creating instance of B
C.__bases__ = (A,)
C() # -> Creating instance of C
## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I cannot vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
    def hello(self):
        print 'Hello'
class Person(object): pass
# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly
class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson
p = Person()
Person.__bases__ = (Friendly,)
p.hello() # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute for new-style classes.
You can define a class Object
class Object(object): pass
which derives a class from the built-in metaclass type.
That's it; now your new-style classes can modify __bases__ without any problem.
In my tests this actually worked very well: all existing instances of it and its derived classes (created before the inheritance change) felt the effect of the change, including their MRO getting updated.
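A minimal sketch of the idea described above:

class Object(object):
    pass

class Friendly(Object):
    def hello(self):
        print('Hello')

class Person(Object):
    pass

p = Person()
Person.__bases__ = (Friendly,)  # no "deallocator differs" error: the original base is Object, not object
p.hello()                       # prints "Hello"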
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar
ensure_class_bases_begin_with(
        namespace=globals(),
        class_name=str('Foo'),   # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
        namespace=foo.__dict__,
        class_name=str('Foo'),   # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
                foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
                self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting at runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.
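A short sketch of that last point, reusing S from above (the M class is only illustrative; remember the attribute comes back wrapped in Sub, so you still call it to get the value):

class M:
    x = 42

    def no_args(self):
        return "no-arg method"

m = S(M)
print(m.no_args())  # no-arg method -- self.f(self.cls) succeeds on the first try
print(m.x())        # 42 -- neither call signature works, so __call__ returns the attribute itself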