Python rookie: Headaches with Object Oriented Programming

What is the difference between these two class declarations? What does "object" do?
class className(object):
    pass

class className:
    pass
Why do I get this error when I run the below code: "Takes no arguments (1 given)"
class Hobbs():
    def represent():
        print "Hobbs represent!"
    represent = classmethod(represent)

Hobbs.represent()
Why does "Foo.class_foo()" give no error even though I did not pass an argument to the function.
class Foo(object):
    @staticmethod
    def static_foo():
        print "static method"

    @classmethod
    def class_foo(cls):
        print "Class method. Automatically passed the class: %s" % cls

Foo.static_foo()
Foo.class_foo()
Why do I get this error when I run the below code?
class Foo(object):
    def static_foo():
        print "static method"
        static_foo = staticmethod(static_foo)

    def class_foo(cls):
        print "Class method. Automatically passed the class: %s" % cls
        class_foo = classmethod(class_foo)

Foo.static_foo()
Foo.class_foo()
"TypeError: unbound method static_foo() must be called with Foo
instance as first argument (got nothing instead)"

Using object as the base class for new classes has been the convention since at least Python 2.2, and such classes are called "New-Style Classes" - see this question for more details. Old-style classes (i.e. ones that don't inherit from object) are removed in Python 3.0. The reasons for the change are somewhat obscure, and have to do with low-level class resolution and inheritance machinery.
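In Python 2 the difference is visible directly from the interpreter; here is a small illustrative snippet (names are made up for the example):

class OldStyle:
    pass

class NewStyle(object):
    pass

print type(OldStyle())   # <type 'instance'> - all old-style instances share one type
print type(NewStyle())   # <class '__main__.NewStyle'>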
Python instance methods, by convention, take self as their first argument. This argument is passed implicitly - so if your method definition doesn't take self, then the interpreter will complain that the method you're trying to call doesn't accept the argument that's being automatically passed to it. This works exactly the same for classmethods, only instead of taking self, they usually take cls. (Just a naming convention.) A quick fix:
class Hobbs():
    def represent(cls):
        print "Hobbs represent!"
    represent = classmethod(represent)

Hobbs.represent()
Calling Foo.class_foo() doesn't cause any issues, as Python automatically passes the class object to the class_foo method whenever you call it. These methods are called bound methods - meaning that they are regular functions, but bound to a class or instance object. Bound methods automatically take the class or instance object that they're bound to as their first argument.
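You can see the binding with the Foo class from the question (output shown approximately):

print Foo.class_foo    # <bound method type.class_foo of <class '__main__.Foo'>>
print Foo.static_foo   # <function static_foo at 0x...> - no binding for a staticmethod
Foo.class_foo()        # Foo is passed in automatically as cls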
Indentation level matters in Python. I tried executing the code sample you provided: the static_foo = ... and class_foo = ... lines must sit inside the Foo class body, not below the class or inside other methods. When indented properly, the code runs fine:
class Foo(object):
    def static_foo():
        print "static method"
    static_foo = staticmethod(static_foo)

    def class_foo(cls):
        print "Class method. Automatically passed the class: %s" % cls
    class_foo = classmethod(class_foo)

Foo.static_foo()
Foo.class_foo()

A declaration with empty brackets - class className(): - is identical to one with no brackets at all. The first form inherits from the builtin class object, making it a "new style class". The reason for new and old style classes is historical, and old-style classes are only kept around for backward compatibility - essentially, in Python 2, the advice is to always inherit from object if you don't inherit from anything else, because some of the fancy tricks you will learn eventually rely on it. If you upgrade to Python 3, this becomes the default behaviour, and all of these class declarations are equivalent.
A classmethod needs to take a first argument similar to self - when you call Hobbs.represent(), Python ends up passing Hobbs in as that first argument. This is the fundamental difference between classmethod and staticmethod: a classmethod takes a first argument (the class it was called on), a staticmethod doesn't.
Same as 2 - the class is passed in to the classmethod in place of the usual self.
This one appears to be an indentation issue - your code works as written if it is indented as:
def static_foo():
    print "static method"
static_foo = staticmethod(static_foo)
but not as
def static_foo():
    print "static method"
    static_foo = staticmethod(static_foo)
Because the line reassigning static_foo needs to be in the class body, not part of the function itself. In the latter, that line isn't executed until the function is run (which means it isn't run, since the function errors) - and it assigns a staticmethod to a local variable rather than to the method itself. This type of error is one of the reasons it is good to use the decorator syntax:
class Hobbs:
    @staticmethod
    def static_foo():
        print "static method"
works.

All of a class's instance methods must take self as the first argument:
class A:
    def my_func(self):
        print "In my func"
Static methods are pretty much just plain functions placed in the class's namespace (and are rarely used in Python).
Class methods are functions in the class namespace that should be called on the class itself rather than on an instance.

Most of your questions aren't really about object orientation per se, but Python's specific implementation of it.
Python 2.x has undergone some evolution, with new features being added. So there are two ways to define classes, resulting in a "new-style class" and an "old-style class". Python 3.x has only "new-style classes".
The base object of new-style classes is called object. If you inherit from that you have a new-style class. It gives you some of the extra features such as certain decorators. If you have a bare (no inheritance) definition in Python 2.x you have an old-style class. This exists for backwards compatibility. In Python 3.x you will also get a new-style class (so inheriting from object is optional there).
You have made represent() a "class method", so it will get the class object as an implicit first argument when it is called. But your definition of represent() accepts no parameters at all, so the call fails with "takes no arguments (1 given)".
Python automatically inserts the class object as argument zero for a class method. So that is the correct pattern and it works.
The method somehow didn't get made into a staticmethod/classmethod, most likely because the indentation is wrong.

class ClassName(OtherClass): means that ClassName inherits from OtherClass. Inheritance is a big subject, but basically it means that ClassName has at least the same functions and fields as OtherClass.
In Python, everything is an object, and therefore all classes inherit, implicitly or explicitly, from object. That said, in Python 2 a bare class ClassName(): declaration is the old syntax and should be avoided; only in Python 3 is class ClassName: equivalent to class ClassName(object):.
A class method is not a static method. It is like any other instance method except it is passed the class as parameter rather than the instance.
Your class method declaration is wrong. It should have a cls parameter.
A static method, on the other hand, is a method that is called out of context, meaning it has no relation to any instance. It can be thought of as an independent function that is simply put in a class for semantic reasons.
This is why it does not require a self parameter and one is never passed to it.
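For illustration, a small hypothetical example of such a function kept in a class purely for semantic grouping:

class TemperatureConverter(object):
    @staticmethod
    def c_to_f(celsius):
        # no self or cls: just a plain function living in the class namespace
        return celsius * 9.0 / 5 + 32

print TemperatureConverter.c_to_f(100)   # 212.0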
You have an indentation error. That might be causing the error.

Related

Getting private attribute in parent class using super(), outside of a method

I have a class with a private constant _BAR = object().
In a child class, outside of a method (no access to self), I want to refer to _BAR.
Here is a contrived example:
class Foo:
    _BAR = object()

    def __init__(self, bar: object = _BAR):
        ...

class DFoo(Foo):
    """Child class where I want to access private class variable from parent."""

    def __init__(self, baz: object = super()._BAR):
        super().__init__(baz)
Unfortunately, this doesn't work. One gets an error: RuntimeError: super(): no arguments
Is there a way to use super outside of a method to get a parent class attribute?
The workaround is to use Foo._BAR, I am wondering though if one can use super to solve this problem.
Inside of DFoo, you cannot refer to Foo._BAR without referring to Foo. Python variables are searched in the local, enclosing, global and built-in scopes (and in this order, it is the so called LEGB rule) and _BAR is not present in any of them.
Let's ignore an explicit Foo._BAR.
Further, it gets inherited: DFoo._BAR will be looked up first in DFoo, and when not found, in Foo.
What other means are there to get the Foo reference? Foo is a base class of DFoo. Can we use this relationship? Yes and no. Yes at execution time and no at definition time.
The problem is that while DFoo is being defined, it does not exist yet. We have no starting point from which to follow the inheritance chain. This rules out an indirect reference (DFoo -> Foo) in a def method(self, ....): line and in a class attribute _DBAR = _BAR.
It is possible to work around this limitation using a class decorator. Define the class and then modify it:
def deco(cls):
    cls._BAR = cls.__mro__[1]._BAR * 2   # __mro__[0] is the class itself
    return cls

class Foo:
    _BAR = 10

@deco
class DFoo(Foo):
    pass

print(Foo._BAR, DFoo._BAR)   # 10 20
A similar effect can be achieved with a metaclass.
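For example, a rough sketch of the same idea with a metaclass (the DoubleBar name is made up for illustration):

class DoubleBar(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:                          # skip the root class itself
            cls._BAR = bases[0]._BAR * 2
        return cls

class Foo(metaclass=DoubleBar):
    _BAR = 10

class DFoo(Foo):
    pass

print(Foo._BAR, DFoo._BAR)   # 10 20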
The last option to get a reference to Foo is at execution time: we have the object self, its type is DFoo, its parent type is Foo, and that is where _BAR exists. The well-known super() is a shortcut to get the parent.
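A minimal sketch of that execution-time route; note the default is resolved inside __init__, not in the signature, so super() is available:

class Foo:
    _BAR = object()

    def __init__(self, bar=None):
        self.bar = self._BAR if bar is None else bar

class DFoo(Foo):
    def __init__(self, baz=None):
        if baz is None:
            baz = super()._BAR   # fine here: we are inside a method, at call time
        super().__init__(baz)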
I have assumed only one base class for simplicity. If there were several base classes, super() returns only one of them. The example class decorator does the same. To understand how several bases are sorted to a sequence, see how the MRO works (Method Resolution Order).
My final thought is that I could not think up a use-case where such access as in the question would be required.
Short answer: you can't!
I'm not going into much detail about the super class itself here. (I've written a pure Python implementation in this gist if you'd like to read it.)
But now let's see how we can call super:
1- Without arguments:
From PEP 3135:
This PEP proposes syntactic sugar for use of the super type to
automatically construct instances of the super type binding to the
class that a method was defined in, and the instance (or class object
for classmethods) that the method is currently acting upon.
The new syntax:
super()
is equivalent to:
super(__class__, <firstarg>)
...and <firstarg> is the first parameter of the method
So this is not an option because you don't have access to the "instance".
(The body of a function/method is not executed unless it gets called, so there is no problem referring to DFoo inside a method body even though DFoo doesn't exist yet.)
2- super(type, instance)
From documentation:
The zero argument form only works inside a class definition, as the
compiler fills in the necessary details to correctly retrieve the
class being defined, as well as accessing the current instance for
ordinary methods.
What were those necessary details mentioned above? A "type" and an "instance":
We can pass neither the "instance" nor the "type" (which is DFoo here). The first is because we are not inside a method, so we have no access to the instance (self). The second is DFoo itself: by the time the body of the DFoo class is being executed, there is no reference to DFoo - it doesn't exist yet. The body of the class is executed inside a namespace, which is a dictionary; after that, a new instance of type type, here named DFoo, is created using that populated dictionary and added to the global namespace. That is roughly what the class keyword does in its simple form.
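A rough illustration of that last point, with a made-up attribute name:

namespace = {'some_attr': 42}            # stand-in for the dict the class body populates
DFoo = type('DFoo', (Foo,), namespace)   # only now does the name DFoo come into existence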
3- super(type, type):
If the second argument is a type, issubclass(type2, type) must be
true
The same reason mentioned above about accessing DFoo applies.
4- super(type):
If the second argument is omitted, the super object returned is
unbound.
If you have an unbound super object you can't do lookups with it (except for the super object's own attributes). Remember that a super() object is a descriptor. You can turn an unbound super object into a bound one by calling __get__ and passing the instance:
class A:
    a = 1

class B(A):
    pass

class C(B):
    sup = super(B)
    try:
        sup.a
    except AttributeError as e:
        print(e)              # 'super' object has no attribute 'a'

obj = C()
print(obj.sup.a)              # 1
obj.sup automatically calls the __get__.
And again, the same reason about accessing the DFoo type mentioned above applies; nothing changes. This was just added for the record - these are the ways we can call super.

Questions related to classes

I have a problem understanding some concepts of data structures in Python, in the following code.
class Stack(object):              #1
    def __init__(self):           #2
        self.items = []
    def isEmpty(self):
        return self.items == []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        self.items.pop()
    def peak(self):
        return self.items[len(self.items)-1]
    def size(self):
        return len(self.items)

s = Stack()
s.push(3)
s.push(7)
print(s.peak())
print(s.size())
s.pop()
print(s.size())
print(s.isEmpty())
I don't understand what this object argument is.
I replaced it with (obj) and it generated an error - why?
I tried to remove it and it worked perfectly - why?
Why do I have __init__ to set up a constructor?
self is an argument, but how does it get passed? And which object does it represent - the class itself?
Thanks.
object is a class, from which class Stack inherits. There is no class obj, hence the error. However, you can define a class that does not inherit from anything (at least, in Python 2).
self represents the object on which the method is called; for example when you do s.pop(), self inside method pop refers to the same object as s - it is not a class, it is an instance of the class.
1
object here is the class your new class inherits from. There is already a base class named object, but there is no class named obj, which is why replacing object with obj causes an error. Anyway, in your example code it is not needed at all, since all classes in Python 3 implicitly extend the object class.
2
__init__ is the constructor of the object, and self there represents the object you are creating, not the class - just like in the other methods you wrote.
Point 1:
Some history is required here... Originally Python had two distinct kinds of types, those implemented in C (whether in the stdlib or C extensions) and those implemented in Python with the class statement. Python 2.2 introduced a new object model (known as "new-style classes") to unify both, but kept the "classic" (aka "old-style") model for compatibility. This new model also introduced quite a lot of goodies like support for computed attributes, cooperative super calls via the super() object, metaclasses etc, all of which come from the builtin object base class.
So in Python 2.2.x to 2.7.x, you can either create a new-style class by inheriting from object (or any subclass of object) or an old-style one by not inheriting from object (nor - obviously - any subclass of object).
In Python 2.7.x, since your example Stack class does not use any feature of the new object model, it works equally well as an 'old-style' or a 'new-style' class, but try to add a custom metaclass or a computed attribute and it will break in one way or another.
Python 3 totally removed old-style class support, and object is the default base class if you don't explicitly specify one, so whatever you do your class WILL inherit from object and will work the same with or without an explicit parent class.
You can read this for more details.
Point 2.1 - I'm not sure I understand the question actually, but anyway:
In Python, objects are not fixed C-struct-like structures with a fixed set of attributes, but dict-like mappings (well, there are exceptions, but let's ignore them for the moment). The set of attributes of an object is composed of the class attributes (methods mainly, but really any name defined at the class level), which are shared between all instances of the class, and instance attributes (belonging to a single instance), which are stored in the instance's __dict__. This implies that you don't define the set of instance attributes at the class level (like in Java or C++ etc), but set them on the instance itself.
The __init__ method is there so you can make sure each instance is initialised with the desired set of attributes. It's kind of an equivalent of a Java constructor, but instead of being only used to pass arguments at instantiation, it's also responsible for defining the set of instance attributes for your class (which you would, in Java, define at the class level).
Point 2.2 : self is the current instance of the class (the instance on which the method is called), so if s is an instance of your Stack class, s.push(42) is equivalent to Stack.push(s, 42).
Note that the argument doesn't have to be called self (which is only a convention, albeit a very strong one), the important part is that it's the first argument.
How s gets passed as self when calling s.push(42) is a bit intricate at first, but it's an interesting example of how to use a small feature set to build a larger one. You can find a detailed explanation of the whole mechanism here, so I won't bother reposting it.
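As a rough illustration of that mechanism, using the Stack class from the question (the function's __get__ is what produces the bound method):

s = Stack()
bound = Stack.push.__get__(s, Stack)   # roughly what the attribute lookup s.push does
bound(42)                              # equivalent to s.push(42)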

When should one use a class method over instance method? [duplicate]

While integrating a Django app I have not used before, I found two different ways to define functions inside the class. The author seems to use them both distinctively and intentionally. The first one is the one that I myself use a lot:
class Dummy(object):
    def some_function(self, *args, **kwargs):
        # do something here
        # self is the class instance
The other one is the one I never use, mostly because I do not understand when and what to use it for:
class Dummy(object):
    @classmethod
    def some_function(cls, *args, **kwargs):
        # do something here
        # cls refers to what?
The classmethod decorator in the python documentation says:
A class method receives the class as the implicit first argument, just
like an instance method receives the instance.
So I guess cls refers to Dummy itself (the class, not the instance). I do not exactly understand why this exists, because I could always do this:
type(self).do_something_with_the_class
Is this just for the sake of clarity, or did I miss the most important part: spooky and fascinating things that couldn't be done without it?
Your guess is correct - you understand how classmethods work.
The why is that these methods can be called both on an instance OR on the class (in both cases, the class object will be passed as the first argument):
class Dummy(object):
    @classmethod
    def some_function(cls, *args, **kwargs):
        print cls

# both of these will have exactly the same effect
Dummy.some_function()
Dummy().some_function()
On the use of these on instances: There are at least two main uses for calling a classmethod on an instance:
self.some_function() will call the version of some_function on the actual type of self, rather than the class in which that call happens to appear (and won't need attention if the class is renamed); and
In cases where some_function is necessary to implement some protocol, but is useful to call on the class object alone.
The difference with staticmethod: There is another way of defining methods that don't access instance data, called staticmethod. That creates a method which does not receive an implicit first argument at all; accordingly it won't be passed any information about the instance or class on which it was called.
In [6]: class Foo(object): some_static = staticmethod(lambda x: x+1)
In [7]: Foo.some_static(1)
Out[7]: 2
In [8]: Foo().some_static(1)
Out[8]: 2
In [9]: class Bar(Foo): some_static = staticmethod(lambda x: x*2)
In [10]: Bar.some_static(1)
Out[10]: 2
In [11]: Bar().some_static(1)
Out[11]: 2
The main use I've found for it is to adapt an existing function (which doesn't expect to receive a self) to be a method on a class (or object).
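For instance, a small sketch of that adaptation (the Vector class name is hypothetical; math.hypot is just a convenient existing two-argument function):

import math

class Vector(object):
    # reuse an existing function as a method on the class, with no self involved
    length = staticmethod(math.hypot)

print Vector.length(3, 4)     # 5.0
print Vector().length(3, 4)   # 5.0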
One of the most common uses of classmethod in Python is factories, which are among the most useful ways to build an object, because a classmethod, like a staticmethod, does not need an existing class instance. (But if we used a staticmethod, we would have to hardcode the instance's class name in the function.)
This blog does a great job of explaining it:
https://iscinumpy.gitlab.io/post/factory-classmethods-in-python/
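A minimal sketch of such a factory (the Date class and from_string name are made up for illustration; calling the factory through a subclass returns that subclass):

class Date(object):
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    @classmethod
    def from_string(cls, text):
        # cls is whichever class the factory was called on
        year, month, day = map(int, text.split('-'))
        return cls(year, month, day)

d = Date.from_string('2015-06-17')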
If you add the decorator @classmethod, that means you are going to make that method into something like a static method of Java or C++ ("static method" is a general term, I guess ;) ).
Python also has @staticmethod. The difference between classmethod and staticmethod is whether you access the class (or a class variable) through an argument or through the class name itself.
class TestMethod(object):
    cls_var = 1

    @classmethod
    def class_method(cls):
        cls.cls_var += 1
        print cls.cls_var

    @staticmethod
    def static_method():
        TestMethod.cls_var += 1
        print TestMethod.cls_var

# call each method from the class itself.
TestMethod.class_method()
TestMethod.static_method()

# construct instances
testMethodInst1 = TestMethod()
testMethodInst2 = TestMethod()

# call each method from the instances
testMethodInst1.class_method()
testMethodInst2.static_method()
All of those calls increase cls_var by 1 and print it.
Every class using the same name in the same scope, and every instance constructed from that class, shares those methods.
There's only one TestMethod.cls_var, and likewise only one TestMethod.class_method() and one TestMethod.static_method().
And the important question: why would these methods be needed?
classmethod or staticmethod is useful when you make that class a factory,
or when you have to initialize your class only once - like opening a file once and then using a feed method to read the file line by line.
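A rough, hypothetical sketch of that "open once, feed line by line" idea using a classmethod factory (LineFeeder and from_path are made-up names):

class LineFeeder(object):
    def __init__(self, fh):
        self.fh = fh

    @classmethod
    def from_path(cls, path):
        # the file is opened once, here, rather than by every caller
        return cls(open(path))

    def feed(self):
        return self.fh.readline()

reader = LineFeeder.from_path('data.txt')   # hypothetical usage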

What is the functionality difference between the Reference of a class and its object/instance in python while calling its objects?

I was searching for the meaning of the default parameters object and self that appear in class and function definitions. Setting that aside: if we are calling an attribute of a class, should we use Foo (the class reference) or Foo() (an instance of the class)?
If you are reading a normal attribute then it doesn't matter. If you are binding a normal attribute then you must use the correct one in order for the code to work. If you are accessing a descriptor then you must use an instance.
The details of python's class semantics are quite well documented in the data model. Especially the __get__ semantics are at work here. Instances basically stack their namespace on top of their class' namespace and add some boilerplate for calling methods.
There are some large "it depends on what you are doing" gotchas at work here. The most important question: do you want to access class or instance attributes? Second, do you want attribute or methods?
Let's take this example:
class Foo(object):
    bar = 1
    baz = 2

    def __init__(self, foobar="barfoo", baz=3):
        self.foobar = foobar
        self.baz = baz

    def meth(self, param):
        print self, param

    @classmethod
    def clsmeth(cls, param):
        print cls, param

    @staticmethod
    def stcmeth(param):
        print param
Here, bar is a class attribute, so you can get it via Foo.bar. Since instances have implicit access to their class namespace, you can also get it as Foo().bar. foobar is an instance attribute, since it is never bound to the class (only instances, i.e. selfs) - you can only get it as Foo().foobar. Last, baz is both a class and an instance attribute. By default, Foo.baz == 2 and Foo().baz == 3, since the class attribute is hidden by the instance attribute set in __init__.
Similarly, in an assignment there are slight differences whether you work on the class or an instance. Foo.bar=2 will set the class attribute (also for all instances) while Foo().bar=2 will create an instance attribute that shadows the class attribute for this specific instance.
For methods, it is somewhat similar. However, here you get the implicit self parameter for instance methods (which is what a function defined in a class becomes). Basically, the call Foo().meth(param=x) is silently translated to Foo.meth(self=Foo(), param=x). This is why it is usually not valid to call Foo.meth(param=x) - meth is not "bound" to an instance and thus lacks the self parameter.
Now, sometimes you do not need any instance data in a method - for example, you have a strict string transformation that is an implementation detail of a larger parser class. This is where @classmethod and @staticmethod come into play. A classmethod's first parameter is always the class, as opposed to the instance for regular methods. Foo().clsmeth(param=x) and Foo.clsmeth(param=x) both result in a call of clsmeth(cls=Foo, param=x); here, the two are equivalent. Going one step further, a staticmethod doesn't get any class or instance information - it is like a raw function bound to the class's namespace.
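To make those claims concrete, a small usage sketch against the Foo class above (values follow from the defaults shown):

f = Foo()
print Foo.bar, f.bar     # 1 1  - class attribute, visible from class and instance
print Foo.baz, f.baz     # 2 3  - the instance attribute set in __init__ shadows the class one
f.bar = 10               # creates an instance attribute on f only
print Foo.bar, f.bar     # 1 10
Foo.meth(f, param=1)     # explicit form of f.meth(param=1)
Foo.clsmeth(param=2)     # cls is Foo either way
f.clsmeth(param=2)
Foo.stcmeth(param=3)     # no class or instance is passed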

Find class in which a method is defined

I want to figure out the type of the class in which a certain method is defined (in essence, the enclosing static scope of the method), from within the method itself, and without specifying it explicitly, e.g.
class SomeClass:
    def do_it(self):
        cls = enclosing_class()  # <-- I need this.
        print(cls)

class DerivedClass(SomeClass):
    pass

obj = DerivedClass()
# I want this to print 'SomeClass'.
obj.do_it()
Is this possible?
If you need this in Python 3.x, please see my other answer—the closure cell __class__ is all you need.
If you need to do this in CPython 2.6-2.7, RickyA's answer is close, but it doesn't work, because it relies on the fact that this method is not overriding any other method of the same name. Try adding a Foo.do_it method in his answer, and it will print out Foo, not SomeClass.
The way to solve that is to find the method whose code object is identical to the current frame's code object:
import inspect

def do_it(self):
    mro = inspect.getmro(self.__class__)
    method_code = inspect.currentframe().f_code
    method_name = method_code.co_name
    for base in reversed(mro):
        try:
            if getattr(base, method_name).func_code is method_code:
                print(base.__name__)
                break
        except AttributeError:
            pass
(Note that the AttributeError could be raised either by base not having something named do_it, or by base having something named do_it that isn't a function, and therefore doesn't have a func_code. But we don't care which; either way, base is not the match we're looking for.)
This may work in other Python 2.6+ implementations. Python does not require frame objects to exist, and if they don't, inspect.currentframe() will return None. And I'm pretty sure it doesn't require code objects to exist either, which means func_code could be None.
Meanwhile, if you want to use this in both 2.7+ and 3.0+, change that func_code to __code__, but that will break compatibility with earlier 2.x.
If you need CPython 2.5 or earlier, you can just replace the inspect calls with the implementation-specific CPython attributes:
import sys

def do_it(self):
    mro = self.__class__.mro()
    method_code = sys._getframe().f_code
    method_name = method_code.co_name
    for base in reversed(mro):
        try:
            if getattr(base, method_name).func_code is method_code:
                print(base.__name__)
                break
        except AttributeError:
            pass
Note that this use of mro() will not work on classic classes; if you really want to handle those (which you really shouldn't want to…), you'll have to write your own mro function that just walks the hierarchy old-school… or just copy it from the 2.6 inspect source.
This will only work in Python 2.x implementations that bend over backward to be CPython-compatible… but that includes at least PyPy. inspect should be more portable, but then if an implementation is going to define frame and code objects with the same attributes as CPython's so it can support all of inspect, there's not much good reason not to make them attributes and provide sys._getframe in the first place…
First, this is almost certainly a bad idea, and not the way you want to solve whatever you're trying to solve but refuse to tell us about…
That being said, there is a very easy way to do it, at least in Python 3.0+. (If you need 2.x, see my other answer.)
Notice that Python 3.x's super pretty much has to be able to do this somehow. How else could super() mean super(THISCLASS, self), where that THISCLASS is exactly what you're asking for?*
Now, there are lots of ways that super could be implemented… but PEP 3135 spells out a specification for how to implement it:
Every function will have a cell named __class__ that contains the class object that the function is defined in.
This isn't part of the Python reference docs, so some other Python 3.x implementation could do it a different way… but at least as of 3.2+, they still have to have __class__ on functions, because Creating the class object explicitly says:
This class object is the one that will be referenced by the zero-argument form of super(). __class__ is an implicit closure reference created by the compiler if any methods in a class body refer to either __class__ or super. This allows the zero argument form of super() to correctly identify the class being defined based on lexical scoping, while the class or instance that was used to make the current call is identified based on the first argument passed to the method.
(And, needless to say, this is exactly how at least CPython 3.0-3.5 and PyPy3 2.0-2.1 implement super anyway.)
In [1]: class C:
   ...:     def f(self):
   ...:         print(__class__)

In [2]: class D(C):
   ...:     pass

In [3]: D().f()
<class '__main__.C'>
Of course this gets the actual class object, not the name of the class, which is apparently what you were after. But that's easy; you just need to decide whether you mean __class__.__name__ or __class__.__qualname__ (in this simple case they're identical) and print that.
* In fact, this was one of the arguments against it: that the only plausible way to do this without changing the language syntax was to add a new closure cell to every function, or to require some horrible frame hacks which may not even be doable in other implementations of Python. You can't just use compiler magic, because there's no way the compiler can tell that some arbitrary expression will evaluate to the super function at runtime…
If you can use @abarnert's method, do it.
Otherwise, you can use some hardcore introspection (for python2.7):
import inspect
from http://stackoverflow.com/a/22898743/2096752 import getMethodClass

def enclosing_class():
    frame = inspect.currentframe().f_back
    caller_self = frame.f_locals['self']
    caller_method_name = frame.f_code.co_name
    return getMethodClass(caller_self.__class__, caller_method_name)

class SomeClass:
    def do_it(self):
        print(enclosing_class())

class DerivedClass(SomeClass):
    pass

DerivedClass().do_it()  # prints 'SomeClass'
Obviously, this is likely to raise an error if:
called from a regular function / staticmethod / classmethod
the calling function has a different name for self (as aptly pointed out by @abarnert, this can be solved by using frame.f_code.co_varnames[0])
Sorry for writing yet another answer, but here's how to do what you actually want to do, rather than what you asked for:
this is about adding instrumentation to a code base to be able to generate reports of method invocation counts, for the purpose of checking certain approximate runtime invariants (e.g. "the number of times that method ClassA.x() is executed is approximately equal to the number of times that method ClassB.y() is executed in the course of a run of a complicated program").
The way to do that is to make your instrumentation function inject the information statically. After all, it has to know the class and method it's injecting code into.
I will have to instrument many classes by hand, and to prevent mistakes I want to avoid typing the class names everywhere. In essence, it's the same reason why typing super() is preferable to typing super(ClassX, self).
If your instrumentation function is "do it manually", the very first thing you want to do is turn it into an actual function instead of doing it manually. Since you obviously only need static injection, using a decorator, either on the class (if you want to instrument every method) or on each method (if you don't), would make this nice and readable. (Or, if you want to instrument every method of every class, you might want to define a metaclass and have your root classes use it, instead of decorating every class.)
For example, here's an easy way to instrument every method of a class:
import collections
import functools
import inspect

_calls = {}

def inject(cls):
    cls._calls = collections.Counter()
    _calls[cls.__name__] = cls._calls

    def make_wrapper(name, method):
        # bind name and method here so each wrapper counts its own method
        # (closing over the loop variables directly would see only the last one)
        @functools.wraps(method)
        def wrapper(*args, **kwargs):
            cls._calls[name] += 1
            return method(*args, **kwargs)
        return wrapper

    for name, method in cls.__dict__.items():
        if inspect.isfunction(method):
            setattr(cls, name, make_wrapper(name, method))
    return cls

@inject
class A(object):
    def f(self):
        print('A.f here')

@inject
class B(A):
    def f(self):
        print('B.f here')

@inject
class C(B):
    pass

@inject
class D(C):
    def f(self):
        print('D.f here')

d = D()
d.f()
B.f(d)
print(_calls)
The output:
{'A': Counter(),
'C': Counter(),
'B': Counter({'f': 1}),
'D': Counter({'f': 1})}
Exactly what you wanted, right?
You can either do what @mgilson suggested or take another approach.
class SomeClass:
    pass

class DerivedClass(SomeClass):
    pass
This makes SomeClass the base class for DerivedClass.
When you normally try to get the __class__.__name__, it will refer to the derived class rather than the parent.
When you call do_it(), it's really passing DerivedClass as self, which is why you are most likely getting DerivedClass being printed.
Instead, try this:
class SomeClass:
    pass

class DerivedClass(SomeClass):
    def do_it(self):
        for base in self.__class__.__bases__:
            print base.__name__

obj = DerivedClass()
obj.do_it()  # Prints SomeClass
Edit:
After reading your question a few more times I think I understand what you want.
class SomeClass:
    def do_it(self):
        cls = self.__class__.__bases__[0].__name__
        print cls

class DerivedClass(SomeClass):
    pass

obj = DerivedClass()
obj.do_it()  # prints SomeClass
[Edited]
A somewhat more generic solution:
import inspect

class Foo:
    pass

class SomeClass(Foo):
    def do_it(self):
        mro = inspect.getmro(self.__class__)
        method_name = inspect.currentframe().f_code.co_name
        for base in reversed(mro):
            if hasattr(base, method_name):
                print(base.__name__)
                break

class DerivedClass(SomeClass):
    pass

class DerivedClass2(DerivedClass):
    pass

DerivedClass().do_it()
>> 'SomeClass'
DerivedClass2().do_it()
>> 'SomeClass'
SomeClass().do_it()
>> 'SomeClass'
This fails when some other class higher in the MRO also has a "do_it" attribute, since finding that attribute is the signal to stop walking the MRO.
