Based on this answer about how __new__ and __init__ are supposed to work in Python,
I wrote this code to dynamically define and create a new class and object.
class A(object):
    def __new__(cls):
        class C(cls, B):
            pass
        self = C()
        return self

    def foo(self):
        print 'foo'

class B(object):
    def bar(self):
        print 'bar'

a = A()
a.foo()
a.bar()
Basically, because A's __new__ returns a dynamically created C that inherits from both A and B, it should have a bar attribute.
Why does C not have a bar attribute?
Resolve the infinite recursion:
class A(object):
    def __new__(cls):
        class C(cls, B):
            pass
        self = object.__new__(C)
        return self
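With that change the recursion is gone, and the original usage from the question should behave as intended (a quick sketch):

a = A()    # A.__new__ builds C, then creates the instance with object.__new__
a.foo()    # 'foo' -- inherited from A
a.bar()    # 'bar' -- inherited from B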
(Thanks to balpha for pointing out the actual question.)
Since there is no actual question in the question, I am going to take it literally:
What's wrong with doing it dynamically?
Well, it is practically unreadable, extremely opaque and non-obvious to the user of your code (that includes you in a month :P).
In my experience (quite limited, I must admit; unfortunately I don't have 20 years of programming under my belt), a need for such solutions indicates that the class structure is not well defined; there is almost always a better, more readable and less arcane way to do such things.
For example, if you really want to define base classes on the fly, you are better off using a factory function that returns the appropriate class according to your needs, as in the sketch below.
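For instance, a minimal sketch of such a factory (the function name make_class_with_B is just illustrative):

class A(object):
    def foo(self):
        print('foo')

class B(object):
    def bar(self):
        print('bar')

def make_class_with_B(base):
    """Build (and here, instantiate) a class that combines `base` with B."""
    class Combined(base, B):
        pass
    return Combined()

a = make_class_with_B(A)
a.foo()  # foo
a.bar()  # bar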
Another take on the question:
What's wrong with doing it dynamically?
In your current implementation, it gives me a "maximum recursion depth exceeded" error. That happens because A.__new__ ends up calling itself indefinitely: C inherits from cls (i.e. from A) and from B, so instantiating C invokes A.__new__ again.
10: Inside A.__new__, "cls" is set to <class '__main__.A'>. Inside the constructor you define a class C, which inherits from cls (which is actually A) and another class B. Upon instantiating C, its __new__ is called. Since it doesn't define its own __new__, its base class's __new__ is called. The base class just happens to be A.
20: GOTO 10
If your question is "How can I accomplish this" – this works:
class A(object):
    @classmethod
    def get_with_B(cls):
        class C(B, cls):
            pass
        return C()

    def foo(self):
        print 'foo'

class B(object):
    def bar(self):
        print 'bar'

a = A.get_with_B()
a.foo()
a.bar()
If your question is "Why doesn't it work" – that's because you run into an infinite recursion when you call C(), which leads to A.__new__ being called, which again calls C() etc.
If we override a parent class's method, we can use super() to avoid mentioning the parent class's name - that's clear.
But what about the case when, in a subclass, we just use some method defined in the parent class? Which way is preferable: super().parent_method() or self.parent_method()? Or is there no difference?
class A:
    def test1(self):
        pass

    def test_a(self):
        pass

class B(A):
    def test1(self):
        super().test1()  # That's clear.

    def test_b(self):
        # Which option is better, when I want to use parent's func here?
        # 1) super().test_a()
        # 2) self.test_a()
        pass
Usually you will want to use self.test_a() to call an inherited method. However, in some rare situations you might want to use super().test_a() even though it seems to do the same thing. They're not equivalent, even though they have the same behavior in your example.
To explore the differences, let's make two versions of your B class, one with each kind of call, then make two C classes that further extend the B classes and override test_a:
class A(object):
    def test_a(self):
        return "A"

class B1(A):
    def test_b(self):
        return self.test_a() + "B"

class B2(A):
    def test_b(self):
        return super().test_a() + "B"

class C1(B1):
    def test_a(self):
        return "C"

class C2(B2):
    def test_a(self):
        return "C"
When you call the test_b() method on C1 and C2 instances you'll get different results, even though B1 and B2 behave the same:
>>> B1().test_b()
'AB'
>>> B2().test_b()
'AB'
>>> C1().test_b()
'CB'
>>> C2().test_b()
'AB'
This is because the super() call in B2.test_b tells Python that you want to skip the version of test_a in any more derived class and always call an implementation from a parent class. (Actually, I suppose it could be a sibling class in a multiple inheritance situation, but that's getting even more obscure.)
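To illustrate that "sibling class" case: here is a hypothetical diamond-shaped hierarchy (the names are invented) in which super() inside Left reaches Right, because super() follows the MRO of the instance rather than Left's direct parent:

class Base:
    def test_a(self):
        return "Base"

class Left(Base):
    def test_b(self):
        return super().test_a() + "B"

class Right(Base):
    def test_a(self):
        return "Right"

class Diamond(Left, Right):
    pass

print(Left().test_b())     # 'BaseB'   -- super() finds Base.test_a
print(Diamond().test_b())  # 'RightB'  -- super() finds the sibling Right.test_a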
Like I said at the top, you usually want to allow a more-derived class like the Cs to override the behavior of the inherited methods you're calling in your less-derived class. That means that most of the time using self.whatever is the way to go. You only need to use super when you're doing something fancy.
Since B is an A, it has a member test_a. So you call it as
self.test_a()
B does not overwrite A.test_a so there is no need to use super() to call it.
Since B overwrites A.test1, you must explicitly name the method you want to call.
self.test1()
will call B.test1, while
super().test1()
will call A.test1.
Firstly, both super().test_a() and self.test_a() will result in execution of the method test_a().
Since class B does not override or overwrite test_a(), I think using self.test_a() will be more efficient, as self is simply a reference to the current object, which is already in memory.
As per the documentation, super() results in the creation of a proxy object that delegates method calls to a parent or sibling class. For this reason, I feel self is the correct approach in your case.
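As a rough illustration of that proxy object (using the question's class names, Python 3):

class A:
    def test_a(self):
        return "A"

class B(A):
    def test_b(self):
        proxy = super()
        print(proxy)           # something like: <super: <class 'B'>, <B object>>
        return proxy.test_a()  # delegated to A.test_a

print(B().test_b())  # prints the proxy repr, then 'A'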
If we override parent class's method, we can use super() to avoid
mention of parent class's name - that's clear.
Actually, super() is not syntactic sugar; its purpose is to invoke the parent's implementation of a certain method.
You have to use super() when you want to override a parent method; you don't have to use super() when you instead want to overwrite a method. The difference is that in the first case you want to add extra behavior (i.e. code execution) before or after the original implementation, while in the second you want a completely different implementation.
You can't use self.method_name() inside an override to call the original implementation; the result will be a recursion error! (RuntimeError: maximum recursion depth exceeded)
Example:
class A:
    def m(self):
        print('A.m implementation')

class B(A):
    def m(self):
        super().m()
        print('B.m implementation')

class C(A):
    def m(self):
        print('C.m implementation')

class D(A):
    def m(self):
        self.m()

a = A()
a.m()
b = B()
b.m()
c = C()
c.m()
d = D()
d.m()
Given a base class A, with a method m, B extends A by overriding m, C extends A by overwriting m, and D generates an error!
EDIT:
I just realized that you actually have 2 different methods (test_a and test_b). My answer is still valid, but regarding your specific scenario:
You should use self.test_a() unless you override/overwrite that method in your class B and you want to execute the original implementation... so we can say that calling super().test_a() or self.test_a() is the same, given that you will never override/overwrite the original test_a() in your subclasses... however, it is nonsense to use super() if not for an override/overwrite.
I've seen a bunch of the python method resolution order questions on Stack Overflow, many of which are excellently answered. I have one that does not quite fit.
When requesting super(MyClassName, self).method_name, I get a type that is not returned by the (single) parent class. Putting debug into the parent class shows that it isn't hit.
I would add some code snippets, but the codebase is massive. I have looked into every class listed in MyClassName.__mro__ (which tells us what the method resolution order is) and NONE of them return the type I'm getting. So the question is...
What tool or attribute in Python can I use to find out what code is actually being called so that if this happens again I can easily find out what is actually being called? I ended up finding the solution, but I'd rather know how to tackle it in a less labour intensive manner.
You can use e.g. inspect.getmodule to, per its documentation:
Try to guess which module an object was defined in.
A simple example, with a.py:
class Parent(object):
    def method(self):
        return True
and b.py:
import inspect

from a import Parent

class Child(Parent):
    def method(self):
        parent_method = super(Child, self).method  # get the parent method
        print "inherited method defined in {}".format(
            inspect.getmodule(parent_method),  # and find out where it came from
        )
        return parent_method()

if __name__ == '__main__':
    Child().method()
Running b.py gives the result:
inherited method defined in <module 'a' from 'C:/Python27\a.py'>
I think you might be getting confused between what a bound method is, and what the method resolution order is.
The returned method still counts as a bound method of the class of the actual object, even if the function the method derives from is found on the parent class. This is because the method has been bound to an instance of the child class, as opposed to the parent class.
For example:
class A:
    def f(self):
        return "A"

class B(A):
    def g(self):
        return super().f

class C(B):
    def f(self):
        return "C"

c = C()
method = c.g()
print(method)    # prints <bound method C.f of <__main__.C object at 0x02D4FA10>>
print(method())  # prints A
In this instance, c.g() returns the function A.f bound to an instance of C.
To find the actual function that the bound method will call just examine the __func__ attribute:
assert method.__func__ is A.f
Dive into Python -
Guido, the original author of Python, explains method overriding this way: "Derived classes may override methods of their base classes. Because methods have no special privileges when calling other methods of the same object, a method of a base class that calls another method defined in the same base class, may in fact end up calling a method of a derived class that overrides it. (For C++ programmers: all methods in Python are effectively virtual.)" If that doesn't make sense to you (it confuses the hell out of me), feel free to ignore it. I just thought I'd pass it along.
I am trying to figure out an example for: a method of a base class that calls another method defined in the same base class, may in fact end up calling a method of a derived class that overrides it
class A:
    def foo(self): print 'A.foo'
    def bar(self): self.foo()

class B(A):
    def foo(self): print 'B.foo'

if __name__ == '__main__':
    a = A()
    a.bar()  # echoes A.foo
    b = B()
    b.bar()  # echoes B.foo
... but both of these seem kind of obvious.
am I missing something that was hinted out in the quote?
UPDATE
edited a typo: the original code called a.foo() (instead of a.bar()) and b.foo() (instead of b.bar())
Yes, you're missing this:
b.bar() # echoes B.foo
B has no bar method of its own, just the one inherited from A. A's bar calls self.foo, but on an instance of B it ends up calling B's foo, not A's foo.
Let's look at your quote again:
a method of a base class that calls
another method defined in the same
base class, may in fact end up calling
a method of a derived class that
overrides it
To translate:
bar (method of A, the base class)
calls self.foo, but may in fact end up
calling a method of the derived class
that overrides it (B.foo that
overrides A.foo)
Note: this won't work for "private" (double-underscore) methods, because of name mangling (tested here in Python 3.6):
class A:
    def __foo(self): print('A.foo')
    def bar(self): self.__foo()

class B(A):
    def __foo(self): print('B.foo')

if __name__ == '__main__':
    a = A()
    a.bar()  # echoes A.foo
    b = B()
    b.bar()  # echoes A.foo, not B.foo
I spent an hour finding out the reason for this.
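The reason is name mangling, sketched roughly below: a double-underscore name is rewritten at compile time to _ClassName__name, so A.bar always calls A's version.

class A:
    def __foo(self): print('A.foo')
    def bar(self): self.__foo()        # compiled as self._A__foo()

class B(A):
    def __foo(self): print('B.foo')    # stored as _B__foo, which A.bar never looks up

b = B()
print([n for n in dir(b) if '__foo' in n])  # ['_A__foo', '_B__foo']
b.bar()  # prints A.foo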
Suppose I want to create an abstract class in Python with some methods to be implemented by subclasses, for example:
class Base():
    def f(self):
        print "Hello."
        self.g()
        print "Bye!"

class A(Base):
    def g(self):
        print "I am A"

class B(Base):
    def g(self):
        print "I am B"
I'd like that, if the base class is instantiated and its f() method is called, an exception is thrown when self.g() is reached, telling you that a subclass should have implemented method g().
What's the usual thing to do here? Should I raise a NotImplementedError? or is there a more specific way of doing it?
In Python 2.6 and better, you can use the abc module to make Base an "actually" abstract base class:
import abc

class Base:
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def g(self):
        pass

    def f(self): # &c
This guarantees that Base cannot be instantiated -- and neither can any subclass that fails to override g -- while meeting @Aaron's goal of allowing subclasses to use super in their g implementations. Overall, a much better solution than what we used to have in Python 2.5 and earlier!
Side note: having Base inherit from object would be redundant, because the metaclass needs to be set explicitly anyway.
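For comparison, a rough Python 3 sketch of the same idea (Python 3 passes the metaclass as a keyword argument instead of setting __metaclass__):

import abc

class Base(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def g(self):
        pass

    def f(self):
        print("Hello.")
        self.g()
        print("Bye!")

class A(Base):
    def g(self):
        print("I am A")

A().f()      # works
# Base().f() # TypeError: Can't instantiate abstract class Base ...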
Make a method that does nothing, but still has a docstring explaining the interface. Getting a NameError is confusing, and raising NotImplementedError (or any other exception, for that matter) will break proper usage of super.
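A minimal sketch of that approach, assuming subclasses may still want to call up to the base implementation:

class Base(object):
    def g(self):
        """Hook for subclasses: do the class-specific part of f()."""

class A(Base):
    def g(self):
        super(A, self).g()  # safe: the base implementation exists and does nothing
        print("I am A")

A().g()  # prints 'I am A'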
Peter Norvig has given a solution for this in his Python Infrequently Asked Questions list. I'll reproduce it here. Do check out the IAQ, it is very useful.
## Python
class MyAbstractClass:
    def method1(self): abstract()

class MyClass(MyAbstractClass):
    pass

def abstract():
    import inspect
    caller = inspect.getouterframes(inspect.currentframe())[1][3]
    raise NotImplementedError(caller + ' must be implemented in subclass')
I have a series of Python classes in a file. Some classes reference others.
My code is something like this:
class A():
    pass

class B():
    c = C()

class C():
    pass
Trying to run that, I get NameError: name 'C' is not defined. Fair enough, but is there any way to make it work, or do I have to manually re-order my classes to accommodate? In C++, I can create a class prototype. Does Python have an equivalent?
(I'm actually playing with Django models, but I tried not to complicate matters.)
Actually, all of the above are great observations about Python, but none of them will solve your problem.
Django needs to introspect stuff.
The right way to do what you want is the following:
class Car(models.Model):
    manufacturer = models.ForeignKey('Manufacturer')
    # ...

class Manufacturer(models.Model):
    # ...
Note the use of the class name as a string rather than the literal class reference. Django offers this alternative to deal with exactly the problem that Python doesn't provide forward declarations.
This question reminds me of the classic support question that you should always ask any customer with an issue: "What are you really trying to do?"
In Python you don't create a prototype per se, but you do need to understand the difference between "class attributes" and instance-level attributes. In the example you've shown above, you are declaring a class attribute on class B, not an instance-level attribute.
This is what you are looking for:
class B():
    def __init__(self):
        self.c = C()
This would solve your problem as presented (but I think you are really looking for an instance attribute as jholloway7 responded):
class A:
    pass

class B:
    pass

class C:
    pass

B.c = C()
Python doesn't have prototypes or Ruby-style open classes. But if you really need them, you can write a metaclass that overloads __new__ so that it does a lookup in the current namespace to see if the class already exists, and if it does, returns the existing type object rather than creating a new one. I did something like this in an ORM I wrote a while back, and it has worked very well.
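This is not the answerer's actual ORM code, but a rough sketch of the idea; the metaclass name ReuseExisting is invented for illustration:

class ReuseExisting(type):
    def __new__(mcls, name, bases, namespace):
        existing = globals().get(name)
        if isinstance(existing, type):
            # A class with this name already exists: return the existing
            # type object instead of creating a new one.
            return existing
        return super().__new__(mcls, name, bases, namespace)

class C(metaclass=ReuseExisting):
    x = 1

class C(metaclass=ReuseExisting):  # the second "declaration" reuses the first
    pass

print(C.x)  # 1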
A decade after the question was asked, I encountered the same problem. While people suggest that the referencing should be done inside the __init__ method, there are times when you need to access the data as a "class attribute" before the class is actually instantiated. For that reason, I have come up with a simple solution using a descriptor.
class A():
    pass

class B():
    class D(object):
        def __init__(self):
            self.c = None

        def __get__(self, instance, owner):
            if not self.c:
                self.c = C()
            return self.c

    c = D()

class C():
    pass

>>> B.c
<__main__.C object at 0x10cc385f8>
All the points above about class vs. instance attributes are correct. However, the reason you get the error is simply the order in which you define your classes: class C has not yet been defined at the point where class B's body is executed (class-level code runs immediately on import):
class A():
    pass

class C():
    pass

class B():
    c = C()
Will work.