I want to figure out the type of the class in which a certain method is defined (in essence, the enclosing static scope of the method), from within the method itself, and without specifying it explicitly, e.g.:
class SomeClass:
    def do_it(self):
        cls = enclosing_class() # <-- I need this.
        print(cls)

class DerivedClass(SomeClass):
    pass

obj = DerivedClass()

# I want this to print 'SomeClass'.
obj.do_it()
Is this possible?
If you need this in Python 3.x, please see my other answer: the closure cell __class__ is all you need.
If you need to do this in CPython 2.6-2.7, RickyA's answer is close, but it doesn't work, because it relies on this method not overriding any other method of the same name. Try adding a Foo.do_it method in his answer, and it will print out Foo, not SomeClass.
The way to solve that is to find the method whose code object is identical to the current frame's code object:
import inspect

def do_it(self):
    mro = inspect.getmro(self.__class__)
    method_code = inspect.currentframe().f_code
    method_name = method_code.co_name
    for base in reversed(mro):
        try:
            if getattr(base, method_name).func_code is method_code:
                print(base.__name__)
                break
        except AttributeError:
            pass
(Note that the AttributeError could be raised either by base not having something named do_it, or by base having something named do_it that isn't a function, and therefore doesn't have a func_code. But we don't care which; either way, base is not the match we're looking for.)
This may not work in other Python 2.6+ implementations. Python does not require frame objects to exist, and if they don't, inspect.currentframe() will return None. And I'm pretty sure it doesn't require code objects to exist either, which means func_code could be None.
Meanwhile, if you want to use this in both 2.7+ and 3.0+, change that func_code to __code__, but that will break compatibility with earlier 2.x.
If you need CPython 2.5 or earlier, you can just replace the inspect calls with the implementation-specific CPython attributes:
import sys

def do_it(self):
    mro = self.__class__.mro()
    method_code = sys._getframe().f_code
    method_name = method_code.co_name
    for base in reversed(mro):
        try:
            if getattr(base, method_name).func_code is method_code:
                print(base.__name__)
                break
        except AttributeError:
            pass
Note that this use of mro() will not work on classic classes; if you really want to handle those (which you really shouldn't want to…), you'll have to write your own mro function that just walks the hierarchy old-school… or just copy it from the 2.6 inspect source.
This will only work in Python 2.x implementations that bend over backward to be CPython-compatible… but that includes at least PyPy. inspect should be more portable, but then if an implementation is going to define frame and code objects with the same attributes as CPython's so it can support all of inspect, there's not much good reason not to make them attributes and provide sys._getframe in the first place…
First, this is almost certainly a bad idea, and not the way you want to solve whatever you're trying to solve but refuse to tell us about…
That being said, there is a very easy way to do it, at least in Python 3.0+. (If you need 2.x, see my other answer.)
Notice that Python 3.x's super pretty much has to be able to do this somehow. How else could super() mean super(THISCLASS, self), where that THISCLASS is exactly what you're asking for?*
Now, there are lots of ways that super could be implemented… but PEP 3135 spells out a specification for how to implement it:
Every function will have a cell named __class__ that contains the class object that the function is defined in.
This isn't part of the Python reference docs, so some other Python 3.x implementation could do it a different way… but at least as of 3.2+, they still have to have __class__ on functions, because "Creating the class object" explicitly says:
This class object is the one that will be referenced by the zero-argument form of super(). __class__ is an implicit closure reference created by the compiler if any methods in a class body refer to either __class__ or super. This allows the zero argument form of super() to correctly identify the class being defined based on lexical scoping, while the class or instance that was used to make the current call is identified based on the first argument passed to the method.
(And, needless to say, this is exactly how at least CPython 3.0-3.5 and PyPy3 2.0-2.1 implement super anyway.)
In [1]: class C:
   ...:     def f(self):
   ...:         print(__class__)

In [2]: class D(C):
   ...:     pass

In [3]: D().f()
<class '__main__.C'>
Of course this gets the actual class object, not the name of the class, which is apparently what you were after. But that's easy; you just need to decide whether you mean __class__.__name__ or __class__.__qualname__ (in this simple case they're identical) and print that.
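For instance, here's a minimal sketch (the nested Outer/Inner names are mine, not from the question) of where the two differ:

# __name__ vs. __qualname__ diverge for a class nested inside another class.
class Outer:
    class Inner:
        def f(self):
            return __class__  # the closure cell described above

cls = Outer.Inner().f()
print(cls.__name__)      # Inner
print(cls.__qualname__)  # Outer.Inner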
* In fact, this was one of the arguments against it: that the only plausible way to do this without changing the language syntax was to add a new closure cell to every function, or to require some horrible frame hacks which may not even be doable in other implementations of Python. You can't just use compiler magic, because there's no way the compiler can tell that some arbitrary expression will evaluate to the super function at runtime…
If you can use @abarnert's method, do it.
Otherwise, you can use some hardcore introspection (for python2.7):
import inspect

# getMethodClass comes from http://stackoverflow.com/a/22898743/2096752
# (copy its definition in; you can't actually import from a URL).

def enclosing_class():
    frame = inspect.currentframe().f_back
    caller_self = frame.f_locals['self']
    caller_method_name = frame.f_code.co_name
    return getMethodClass(caller_self.__class__, caller_method_name)

class SomeClass:
    def do_it(self):
        print(enclosing_class())

class DerivedClass(SomeClass):
    pass

DerivedClass().do_it() # prints 'SomeClass'
Obviously, this is likely to raise an error if:
called from a regular function / staticmethod / classmethod
the calling function has a different name for self (as aptly pointed out by @abarnert, this can be solved by using frame.f_code.co_varnames[0]; see the sketch below)
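Here's a sketch of that hardened variant. It's the same introspection as above, still assuming getMethodClass has been copied in from the linked answer, but it reads the first argument's name from the code object instead of assuming it's spelled 'self':

import inspect

def enclosing_class():
    frame = inspect.currentframe().f_back
    # Look up the first positional argument by name, whatever the
    # caller chose to call it, instead of hardcoding 'self'.
    caller_self = frame.f_locals[frame.f_code.co_varnames[0]]
    caller_method_name = frame.f_code.co_name
    return getMethodClass(caller_self.__class__, caller_method_name)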
Sorry for writing yet another answer, but here's how to do what you actually want to do, rather than what you asked for:
this is about adding instrumentation to a code base to be able to generate reports of method invocation counts, for the purpose of checking certain approximate runtime invariants (e.g. "the number of times that method ClassA.x() is executed is approximately equal to the number of times that method ClassB.y() is executed in the course of a run of a complicated program").
The way to do that is to make your instrumentation function inject the information statically. After all, it has to know the class and method it's injecting code into.
I will have to instrument many classes by hand, and to prevent mistakes I want to avoid typing the class names everywhere. In essence, it's the same reason why typing super() is preferable to typing super(ClassX, self).
If your instrumentation function is "do it manually", the very first thing you want to do is turn it into an actual function instead of doing it manually. Since you obviously only need static injection, using a decorator, either on the class (if you want to instrument every method) or on each method (if you don't), would make this nice and readable. (Or, if you want to instrument every method of every class, you might want to define a metaclass and have your root classes use it, instead of decorating every class.)
For example, here's an easy way to instrument every method of a class:
import collections
import functools
import inspect

_calls = {}

def inject(cls):
    cls._calls = collections.Counter()
    _calls[cls.__name__] = cls._calls
    def make_wrapper(name, method):
        # Freeze name and method per wrapper; a plain closure over the
        # loop variables below would only see their final values.
        @functools.wraps(method)
        def wrapper(*args, **kwargs):
            cls._calls[name] += 1
            return method(*args, **kwargs)
        return wrapper
    for name, method in list(cls.__dict__.items()):
        if inspect.isfunction(method):
            setattr(cls, name, make_wrapper(name, method))
    return cls

@inject
class A(object):
    def f(self):
        print('A.f here')

@inject
class B(A):
    def f(self):
        print('B.f here')

@inject
class C(B):
    pass

@inject
class D(C):
    def f(self):
        print('D.f here')

d = D()
d.f()
B.f(d)

print(_calls)
The output:
{'A': Counter(),
 'C': Counter(),
 'B': Counter({'f': 1}),
 'D': Counter({'f': 1})}
Exactly what you wanted, right?
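For completeness, here's a sketch of the metaclass alternative mentioned earlier (InstrumentedMeta and the helper are hypothetical names of mine): every class created with the metaclass, including subclasses, gets instrumented automatically, with no decorator needed on each one.

import collections
import functools
import inspect

def make_counting_wrapper(cls, name, method):
    # Freeze cls, name, and method per wrapper.
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        cls._calls[name] += 1
        return method(*args, **kwargs)
    return wrapper

class InstrumentedMeta(type):
    def __new__(meta, name, bases, dct):
        cls = super().__new__(meta, name, bases, dct)
        cls._calls = collections.Counter()
        for attr, method in dct.items():
            if inspect.isfunction(method):
                setattr(cls, attr, make_counting_wrapper(cls, attr, method))
        return cls

class Root(metaclass=InstrumentedMeta):  # Python 3 spelling
    def f(self):
        print('Root.f here')

class Child(Root):  # picks up the metaclass automatically
    def f(self):
        print('Child.f here')

Child().f()
print(Child._calls)  # Counter({'f': 1})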
You can either do what @mgilson suggested or take another approach.
class SomeClass:
    pass

class DerivedClass(SomeClass):
    pass
This makes SomeClass the base class for DerivedClass.
When you try to get self.__class__.__name__ in the usual way, it will refer to the derived class rather than the parent.
When you call do_it(), a DerivedClass instance is really being passed as self, which is why you are most likely seeing DerivedClass printed.
Instead, try this:
class SomeClass:
    pass

class DerivedClass(SomeClass):
    def do_it(self):
        for base in self.__class__.__bases__:
            print base.__name__

obj = DerivedClass()
obj.do_it() # Prints SomeClass
Edit:
After reading your question a few more times I think I understand what you want.
class SomeClass:
    def do_it(self):
        cls = self.__class__.__bases__[0].__name__
        print cls

class DerivedClass(SomeClass):
    pass

obj = DerivedClass()
obj.do_it() # prints SomeClass
[Edited]
A somewhat more generic solution:
import inspect

class Foo:
    pass

class SomeClass(Foo):
    def do_it(self):
        mro = inspect.getmro(self.__class__)
        method_name = inspect.currentframe().f_code.co_name
        for base in reversed(mro):
            if hasattr(base, method_name):
                print(base.__name__)
                break

class DerivedClass(SomeClass):
    pass

class DerivedClass2(DerivedClass):
    pass

DerivedClass().do_it()
>> 'SomeClass'
DerivedClass2().do_it()
>> 'SomeClass'
SomeClass().do_it()
>> 'SomeClass'
This fails when some other class in the hierarchy has an attribute named "do_it", since finding that attribute is the signal to stop walking the MRO.
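To see that concretely, here's a sketch reusing the Foo base above, with a hypothetical do_it added:

class Foo:
    def do_it(self):  # hypothetical addition
        pass

# With this Foo in place, DerivedClass().do_it() prints 'Foo' rather
# than 'SomeClass', because hasattr(Foo, 'do_it') is now True and the
# reversed-MRO walk stops at Foo.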
Related
I'd like a particular function to be callable as a classmethod, and to behave differently when it's called on an instance.
For example, if I have a class Thing, I want Thing.get_other_thing() to work, but also thing = Thing(); thing.get_other_thing() to behave differently.
I think overwriting the get_other_thing method on initialization should work (see below), but that seems a bit hacky. Is there a better way?
class Thing:
    def __init__(self):
        self.get_other_thing = self._get_other_thing_inst

    @classmethod
    def get_other_thing(cls):
        # do something...
        pass

    def _get_other_thing_inst(self):
        # do something else
        pass
Great question! What you seek can be easily done using descriptors.
Descriptors are Python objects which implement the descriptor protocol, usually starting with __get__().
They exist, mostly, to be set as a class attribute on different classes. Upon accessing them, their __get__() method is called, with the instance and owner class passed in.
class DifferentFunc:
    """Deploys a different function according to attribute access

    I am a descriptor.
    """
    def __init__(self, clsfunc, instfunc):
        # Set our functions
        self.clsfunc = clsfunc
        self.instfunc = instfunc

    def __get__(self, inst, owner):
        # Accessed from class
        if inst is None:
            return self.clsfunc.__get__(None, owner)

        # Accessed from instance
        return self.instfunc.__get__(inst, owner)


class Test:
    @classmethod
    def _get_other_thing(cls):
        print("Accessed through class")

    def _get_other_thing_inst(inst):
        print("Accessed through instance")

    get_other_thing = DifferentFunc(_get_other_thing,
                                    _get_other_thing_inst)
And now for the result:
>>> Test.get_other_thing()
Accessed through class
>>> Test().get_other_thing()
Accessed through instance
That was easy!
By the way, did you notice me using __get__ on the class and instance function? Guess what? Functions are also descriptors, and that's the way they work!
>>> def func(self):
... pass
...
>>> func.__get__(object(), object)
<bound method func of <object object at 0x000000000046E100>>
Upon accessing a function attribute, its __get__ is called, and that's how you get function binding.
For more information, I highly suggest reading the Python manual and the "How-To" linked above. Descriptors are one of Python's most powerful features and are barely even known.
Why not set the function on instantiation?
Or: why not set self.func = self._func inside __init__?
Setting the function on instantiation comes with quite a few problems:
self.func = self._func causes a circular reference. The instance is stored inside the bound method object returned by self._func, which in turn is stored on the instance during the assignment. The end result is that the instance references itself, and will be cleaned up in a much slower and heavier manner (by the cycle-detecting garbage collector rather than by reference counting).
Other code interacting with your class might attempt to take the function straight out of the class, and use __get__(), which is the usual expected method, to bind it. They will receive the wrong function.
Will not work with __slots__.
Although with descriptors you need to understand the mechanism, setting it on __init__ isn't as clean and requires setting multiple functions on __init__.
Takes more memory. Instead of storing one single function, you store a bound function for each and every instance.
Will not work with properties.
There are many more that I didn't add as the list goes on and on.
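To make one of those concrete, here's a quick sketch of the __slots__ failure (point 3 above), with hypothetical names:

class Slotted:
    __slots__ = ('x',)  # no per-instance __dict__ is created

    def _get_other_thing_inst(self):
        return 2

    def __init__(self):
        # AttributeError: 'Slotted' object has no attribute
        # 'get_other_thing' -- there is no __dict__ to store it in.
        self.get_other_thing = self._get_other_thing_inst

Slotted()  # raises AttributeError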
Here is a slightly hacky solution:
class Thing(object):
    @staticmethod
    def get_other_thing():
        return 1

    def __getattribute__(self, name):
        if name == 'get_other_thing':
            return lambda: 2
        return super(Thing, self).__getattribute__(name)

print Thing.get_other_thing() # 1
print Thing().get_other_thing() # 2
If the attribute is accessed on the class, the staticmethod is executed. If it is accessed on an instance, __getattribute__ is executed first, so we can return something other than Thing.get_other_thing (a lambda, in my case).
For a recursive function we can do:
def f(i):
    if i < 0: return
    print i
    f(i-1)

f(10)
However is there a way to do the following thing?
class A:
    # do something
    some_func(A)
    # ...
If I understand your question correctly, you should be able to reference class A within class A by putting the type annotation in quotes. This is called a forward reference.
class A:
    # do something
    def some_func(self, a: 'A'):
        ...
See ref below
https://github.com/python/mypy/issues/3661
https://www.youtube.com/watch?v=AJsrxBkV3kc
In Python you cannot reference the class in the class body, although in languages like Ruby you can do it.
In Python you can instead use a class decorator, but that will be called only once the class has been created. Another way could be to use a metaclass, but it depends on what you are trying to achieve.
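A minimal sketch of the class-decorator route (some_func here is a hypothetical stand-in for whatever you want to run against the class):

def some_func(cls):
    # runs once the class object exists, i.e. after its body finishes
    print('got class', cls.__name__)
    return cls

@some_func          # prints: got class A
class A:
    pass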
You can't with the specific syntax you're describing, due to the time at which it is evaluated. The reason the given example function works is that the name resolution for the call to f(i-1) within the function body is not performed until the function is actually called. At that point f exists within the scope of execution, since the function has already been evaluated. In the case of the class example, the reference to the class name is looked up while the class definition is still being evaluated. As such, it does not yet exist in the local scope.
Alternatively, the desired behavior can be accomplished using a metaclass like such:
class MetaA(type):
    def __init__(cls, name, bases, dct):
        super(MetaA, cls).__init__(name, bases, dct)
        some_func(cls)

class A(object):
    __metaclass__ = MetaA
    # do something
    # ...
Using this approach you can perform arbitrary operations on the class object at the time that the class is evaluated.
Maybe you could try calling __class__.
Right now I'm writing code that calls a class method from within the same class.
It is working well so far.
I'm creating the class methods using something like:
@classmethod
def my_class_method(cls):
    return None
And calling them by using:
x = __class__.my_class_method()
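Pieced together, a minimal runnable sketch of that pattern looks like this (Widget and use_it are hypothetical names of mine):

class Widget:
    @classmethod
    def my_class_method(cls):
        return None

    def use_it(self):
        # __class__ is the class this method was defined in (Widget),
        # even if self is an instance of a subclass.
        x = __class__.my_class_method()
        return x

Widget().use_it()  # calls Widget.my_class_method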
It seems most of the answers here are outdated. From Python 3.7:
from __future__ import annotations
Example:
$ cat rec.py
from __future__ import annotations

class MyList:
    def __init__(self, e):
        self.data = [e]

    def add(self, e):
        self.data.append(e)
        return self

    def score(self, other: MyList):
        return len([e
                    for e in self.data
                    if e in other.data])

print(MyList(8).add(3).add(4).score(MyList(4).add(9).add(3)))

$ python3.7 rec.py
2
Nope. It works in a function because the function contents are executed at call-time. But the class contents are executed at define-time, at which point the class doesn't exist yet.
It's not normally a problem because you can hack further members into the class after defining it, so you can split up a class definition into multiple parts:
class A(object):
    spam = 1

some_func(A)

A.eggs = 2

def _A_scramble(self):
    self.spam = self.eggs = 0
A.scramble = _A_scramble
It is, however, pretty unusual to want to call a function on the class in the middle of its own definition. It's not clear what you're trying to do, but chances are you'd be better off with decorators (or the relatively new class decorators).
There isn't a way to do that within the class scope, not unless A was defined to be something else first (and then some_func(A) will do something entirely different from what you expect)
Unless you're doing some sort of stack inspection to add bits to the class, it seems odd that you'd want to do that. Why not just:
class A:
    # do something
    pass

some_func(A)
That is, run some_func on A after it's been made. Alternately, you could use a class decorator (syntax for it was added in 2.6) or metaclass if you wanted to modify class A somehow. Could you clarify your use case?
If you want to do just a little hacky thing, do:
class A(object):
    ...

some_func(A)
If you want to do something more sophisticated you can use a metaclass. A metaclass is responsible for manipulating the class object before it gets fully created. A template would be:
class AType(type):
    def __new__(meta, name, bases, dct):
        cls = super(AType, meta).__new__(meta, name, bases, dct)
        some_func(cls)
        return cls

class A(object):
    __metaclass__ = AType
    ...
type is the default metaclass. Instances of metaclasses are classes so __new__ returns a modified instance of (in this case) A.
For more on metaclasses, see http://docs.python.org/reference/datamodel.html#customizing-class-creation.
If the goal is to call a function some_func with the class as an argument, one answer is to declare some_func as a class decorator. Note that the class decorator is called after the class is initialized. It will be passed the class that is being decorated as an argument.
def some_func(cls):
    # Do something
    print(f"The answer is {cls.x}")
    return cls  # Don't forget to return the class

@some_func
class A:
    x = 1
If you want to pass additional arguments to some_func you have to return a function from the decorator:
def some_other_func(prefix, suffix):
    def inner(cls):
        print(f"{prefix} {cls.__name__} {suffix}")
        return cls
    return inner

@some_other_func("Hello", "and goodbye!")
class B:
    x = 2
Class decorators can be composed, which results in them being called in the reverse order they are declared:
@some_func
@some_other_func("Hello", "and goodbye!")
class C:
    x = 42
The result of which is:
# Hello C and goodbye!
# The answer is 42
What do you want to achieve? It's possible to access a class to tweak its definition using a metaclass, but it's not recommended.
Your code sample can be written simply as:
class A(object):
    pass

some_func(A)
If you want to refer to the same object, just use 'self':
class A:
    def some_func(self):
        another_func(self)
If you want to create a new object of the same class, just do it:
class A:
    def some_func(self):
        foo = A()
If you want to have access to the class object itself (most likely not what you want), again, just do it:
class A:
    def some_func(self):
        another_func(A)  # note that it reads A, not A()
Do remember that in Python, type hinting is mostly there for auto-completion: it helps the IDE infer types and warn the user before runtime. At runtime, type hints are almost never used (except in some cases), so you can do something like this:
from typing import Any, Optional, NewType

LinkListType = NewType("LinkList", object)

class LinkList:
    value: Any
    _next: LinkListType

    def set_next(self, ll: LinkListType):
        self._next = ll

if __name__ == '__main__':
    r = LinkList()
    r.value = 1
    r.set_next(ll=LinkList())
    print(r.value)
And the IDE successfully infers its type as LinkList.
Note: Since next can be None, hinting that in the type would be better; I just didn't want to confuse the OP.
class LinkList:
    value: Any
    next: Optional[LinkListType]
It's ok to reference the name of the class inside its body (like inside method definitions) if it's actually in scope... Which it will be if it's defined at top level. (In other cases probably not, due to Python scoping quirks!).
For an illustration of the scoping gotcha, try to instantiate Foo:
class Foo(object):
    class Bar(object):
        def __init__(self):
            self.baz = Bar.baz
        baz = 15

    def __init__(self):
        self.bar = Foo.Bar()
(It's going to complain about the global name 'Bar' not being defined.)
Also, something tells me you may want to look into class methods: the docs on the classmethod function (to be used as a decorator), and a relevant SO question. Edit: Ok, so this suggestion may not be appropriate at all... It's just that the first thing I thought about when reading your question was stuff like alternative constructors etc. If something simpler suits your needs, steer clear of @classmethod weirdness. :-)
Most code in the class will be inside method definitions, in which case you can simply use the name A.
While integrating a Django app I have not used before, I found two different ways to define functions inside the class. The author seems to use them both distinctively and intentionally. The first one is the one that I myself use a lot:
class Dummy(object):
    def some_function(self, *args, **kwargs):
        # do something here
        # self is the class instance
        pass
The other one is the one I never use, mostly because I do not understand when and what to use it for:
class Dummy(object):
    @classmethod
    def some_function(cls, *args, **kwargs):
        # do something here
        # cls refers to what?
        pass
The classmethod decorator in the python documentation says:
A class method receives the class as the implicit first argument, just
like an instance method receives the instance.
So I guess cls refers to Dummy itself (the class, not the instance). I do not exactly understand why this exists, because I could always do this:
type(self).do_something_with_the_class
Is this just for the sake of clarity, or did I miss the most important part: spooky and fascinating things that couldn't be done without it?
Your guess is correct - you understand how classmethods work.
The why is that these methods can be called both on an instance OR on the class (in both cases, the class object will be passed as the first argument):
class Dummy(object):
    @classmethod
    def some_function(cls, *args, **kwargs):
        print cls

# both of these will have exactly the same effect
Dummy.some_function()
Dummy().some_function()
On the use of these on instances: There are at least two main uses for calling a classmethod on an instance:
self.some_function() will call the version of some_function on the actual type of self, rather than the class in which that call happens to appear (and won't need attention if the class is renamed); and
In cases where some_function is necessary to implement some protocol, but is useful to call on the class object alone.
The difference with staticmethod: There is another way of defining methods that don't access instance data, called staticmethod. That creates a method which does not receive an implicit first argument at all; accordingly it won't be passed any information about the instance or class on which it was called.
In [6]: class Foo(object): some_static = staticmethod(lambda x: x+1)
In [7]: Foo.some_static(1)
Out[7]: 2
In [8]: Foo().some_static(1)
Out[8]: 2
In [9]: class Bar(Foo): some_static = staticmethod(lambda x: x*2)
In [10]: Bar.some_static(1)
Out[10]: 2
In [11]: Bar().some_static(1)
Out[11]: 2
The main use I've found for it is to adapt an existing function (which doesn't expect to receive a self) to be a method on a class (or object).
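For example, a sketch of that adaptation (Vector and existing_function are hypothetical names of mine): wrapping a plain function in staticmethod stops Python from trying to bind it as a method.

import math

def existing_function(x, y):
    # a plain function that knows nothing about self
    return math.hypot(x, y)

class Vector(object):
    # Without staticmethod, Vector().length(3, 4) would pass the
    # instance as x and break; with it, the function is left alone.
    length = staticmethod(existing_function)

print(Vector.length(3, 4))    # 5.0
print(Vector().length(3, 4))  # 5.0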
One of the most common uses of classmethod in Python is factories, which are one of the most flexible ways to build an object, because classmethods, like staticmethods, do not require the construction of a class instance. (But if we used a staticmethod, we would have to hardcode the instance's class name in the function.)
This blog does a great job of explaining it:
https://iscinumpy.gitlab.io/post/factory-classmethods-in-python/
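A short sketch of the factory idea (Point, from_tuple, and LabeledPoint are hypothetical names): because the alternate constructor receives cls, subclasses inherit it and get instances of themselves for free.

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    @classmethod
    def from_tuple(cls, pair):
        # cls is whichever class the factory was called on
        return cls(pair[0], pair[1])

class LabeledPoint(Point):
    pass

p = Point.from_tuple((1, 2))
q = LabeledPoint.from_tuple((3, 4))
print(type(p).__name__, type(q).__name__)  # Point LabeledPoint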
If you add the @classmethod decorator, that means you are going to make that method the equivalent of a static method in Java or C++. (static method is a general term, I guess ;))
Python also has @staticmethod. The difference between classmethod and staticmethod is whether you access the class or static variable through the cls argument or through the class name itself.
class TestMethod(object):
    cls_var = 1

    @classmethod
    def class_method(cls):
        cls.cls_var += 1
        print cls.cls_var

    @staticmethod
    def static_method():
        TestMethod.cls_var += 1
        print TestMethod.cls_var

# call each method from class itself.
TestMethod.class_method()
TestMethod.static_method()

# construct instances
testMethodInst1 = TestMethod()
testMethodInst2 = TestMethod()

# call each method from instances
testMethodInst1.class_method()
testMethodInst2.static_method()
All of those calls increase cls.cls_var by 1 and print it. Every piece of code using the same class name in the same scope, and every instance constructed from that class, shares those methods: there's only one TestMethod.cls_var, and there's also only one TestMethod.class_method() and one TestMethod.static_method().
And the important question: why would these methods be needed? classmethod or staticmethod is useful when you make that class a factory, or when you have to initialize your class only once, like opening a file once and using a feed method to read it line by line.
This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to an existing class's collection of base classes. Ok, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old-style classes with regard to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question, is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a read-only attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status, as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person

    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()
    # works! But:
    oldperson.hello()
    # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity -- it came up in the context of the use by many Python applications of a Python-based enterprise server, where different applications needed slightly different variations of some of the code.)
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()                 # -> Creating instance of A
B()                 # -> Creating instance of B
C.__bases__ = (A,)
C()                 # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct subclass
## of object doesn't work
class D: pass
D.__bases__ = (A,)
D()

## Result is:
##     TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I can't vouch for the consequences, but this code does what you want on py2.7.2:
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done, then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute for new-style classes.
You can define a class object
class Object(object): pass
which simply derives from object (a class created by the built-in metaclass type).
That's it: now your new-style classes can modify their __bases__ without any problem.
In my tests this actually worked very well: all existing instances (created before changing the inheritance) of it and of its derived classes felt the effect of the change, including their MRO getting updated.
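A sketch of how that looks in practice on CPython 2.7, reusing the Friendly/Person example from the question:

class Object(object): pass

class Friendly(Object):
    def hello(self):
        print 'Hello'

# Person derives from Object rather than directly from object,
# so its __bases__ can be reassigned.
class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello" -- existing instances feel the change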
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
        namespace=globals(),
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)
Used like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
        namespace=foo.__dict__,
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)


class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
                foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
                self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap

DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; it makes more sense.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try / except statements
        # are for making sure if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.
Sometimes self can denote the instance of the class and sometimes the class itself. So why don't we use inst and klass instead of self? Wouldn't that make things easier?
How things are now
class A:
    @classmethod
    def do(self):  # self refers to class
        ...

class B:
    def do(self):  # self refers to instance of class
        ...
How I think they should be
class A:
    @classmethod
    def do(klass):  # no ambiguity
        ...

class B:
    def do(inst):  # no ambiguity
        ...
So how come we don't program like this when in the zen of Python it is stated that explicit is better than implicit? Is there something that I am missing?
Class method support was added much later to Python, and the convention to use self for instances had already been established. Keeping that convention stable has more value than switching to a longer name like instance.
The convention for class methods is to use the name cls:
class A:
    @classmethod
    def do(cls):
        ...
In other words, the conventions are already there to distinguish between a class object and the instance; never use self for class methods.
Also see PEP 8 - Function and method arguments:
Always use self for the first argument to instance methods.
Always use cls for the first argument to class methods.
I think it would be better to use "cls":
class A:
    @classmethod
    def do(cls):  # cls refers to class
        ...

class B:
    def do(self):  # self refers to instance of class
        ...
It's a requirement of PEP 8:
http://legacy.python.org/dev/peps/pep-0008/#function-and-method-arguments
I think the point is that conventionally you don't use self for methods wrapped with @classmethod. (You could write kls, cls, etc.)
There is ultimately nothing stopping you from writing inst instead of self if you so desire. So your second example would work fine and is actually the expected way to handle it (in terms of distinguishing an instance vs a class). However, you should definitely use self when dealing with instances. It's a Python convention and breaking it is strongly discouraged.
PEP8
Seeing as others have mentioned it, it's true PEP8 does say to use both self and cls in the case of instance and class methods, respectively. The only thing I'd add to this is that while there isn't any sensible reason to break this rule, changing self is significantly worse (from a semantic POV) because of its strong use inside of 99.999% of Python code. Its use is so universal that many (if not most) beginners assume it's a keyword and are confused by the idea that one can change self to anything.
This strong relationship to code and convention is not so apparent with class methods IMO. Of course I would urge anyone to follow PEP8 as much as possible, but if you felt inclined to use kls instead of cls, I feel that you'd be committing a lesser evil than if you changed self. However, whichever name you go with should remain consistent throughout your program.