Abstract class with Python 2.5

I am currently refactoring a class that defines either a client or a server. The class had a lot of
if client:
    XXXXXXXXXXXX
elif server:
    YYYYYYYYYYYY
So I decided to create a class A with the shared code, plus one class C for the client and another one S for the server, both inheriting from A. (They don't have these names of course ^^)
So class A is some kind of abstract class. But the problem is that there are no abstract classes in Python 2.5; they only arrive with version 2.6. So I was wondering if there is a way to forbid instantiation of class A.
One solution would have been to raise NotImplementedError in the constructor of class A, but C and S share the same constructor code, so I put it in the "abstract" class A (bad idea?).
This may seem stupid, but I only develop in Python from time to time and I'm a young programmer.
What is your advice?

In statically-typed languages, you use an abstract base class (ABC) because you need some object with a defined size, interface etc. to pass around. Otherwise, the code trying to call methods on that object can't be compiled.
Python isn't a statically-typed language, and the calling code doesn't need to know the type of the object it's calling at all. So, you can "define" your ABC just by documenting the interface requirements, and implementing that interface directly in two unrelated classes.
Eg,
class Server:
    def do_thing(self):
        pass  # do that server thing here

class Client:
    def do_thing(self):
        pass  # do that client thing here

def do_thing(thingy):
    thingy.do_thing()  # is it a Client? a Server? something else?

s = Server()
c = Client()
do_thing(s)
do_thing(c)
Here, you could pass in any object with a do_thing method whose arguments match the call.

This approach has the advantage that you do not need to do anything to the subclass to make it non-abstract.
class ABC(object):
    abstract = True
    def __new__(cls, *args, **kwargs):
        if "abstract" in cls.__dict__ and cls.__dict__["abstract"] == True:
            raise RuntimeError(cls.__name__ + " is abstract!")
        return object.__new__(cls)

class Subclass(ABC):
    pass

print Subclass()
print ABC()
Output:
<__main__.Subclass object at 0xb7878a6c>
Traceback (most recent call last):
  File "abc.py", line 14, in <module>
    print ABC()
  File "abc.py", line 6, in __new__
    raise RuntimeError(cls.__name__ + " is abstract!")
RuntimeError: ABC is abstract!
If you want to create an abstract subclass, simply do this:
class AbstractSubclass(ABC):
    abstract = True

You can call a method "foo" at the beginning of A's constructor. In A, this method raises an exception. In C and in S, you redefine "foo" so that no exception is raised anymore.
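A minimal sketch of that idea, reusing the question's class names A, C and S and the placeholder method name foo (the error message is made up):
class A(object):
    def __init__(self):
        self.foo()    # raises in A itself; C and S override it with a no-op
        # ... the constructor code shared by C and S goes here ...

    def foo(self):
        raise NotImplementedError("A is abstract - instantiate C or S instead")

class C(A):
    def foo(self):
        pass

class S(A):
    def foo(self):
        pass

c = C()    # fine: the shared __init__ runs
a = A()    # raises NotImplementedError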

My first question is: why can't you simply avoid instantiating an object of class A? What I mean is that this is a bit like questions on implementing singletons... As this answerer correctly quoted:
Before the Gang of Four got all academic on us, "singleton" (without the formal name) was just a simple idea that deserved a simple line of code, not a whole religion.
The same - IMO - applies to abstract classes (which in fact were introduced in Python for reasons other than the one you intend to use them for).
That said, you could raise an exception in the __init__ method of class A. Something like:
>>> class A():
...     def __init__(self):
...         raise BaseException()
...
>>> a = A()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __init__
BaseException
>>> class B(A):
...     def __init__(self):
...         pass
...
>>> b = B()
>>>
Of course this is just a rough idea: if you have - for example - some useful stuff in A.__init__, you should check the __class__ attribute, etc...
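A rough sketch of that check, keeping the useful shared code inside A.__init__ (using NotImplementedError here rather than the bare BaseException above):
class A(object):
    def __init__(self):
        if self.__class__ is A:    # block only direct A() instantiation
            raise NotImplementedError("A is abstract")
        # ... shared initialisation, reused by the subclasses through inheritance ...

class C(A):
    pass

c = C()    # fine: __class__ is C, so the shared code runs
a = A()    # raises NotImplementedError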

The real question is: why do you need an abstract class?
If you make two classes and have the second inherit from the first, that is an efficient way to clean up your code.

There's an abstract base class module (abc) for what you want.
Not applicable for 2.5.
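For reference, this is roughly what the abc module (available from 2.6 onwards) looks like in use - shown only for contrast, since it won't help on 2.5:
from abc import ABCMeta, abstractmethod

class A(object):
    __metaclass__ = ABCMeta    # Python 2 spelling; in Python 3 use class A(metaclass=ABCMeta)

    @abstractmethod
    def do_thing(self):
        pass

class C(A):
    def do_thing(self):
        return "client thing"

c = C()    # fine
a = A()    # TypeError: Can't instantiate abstract class A with abstract methods do_thing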

Related

Why is it that Python's __init__ function doesn't have a return statement even though it's a function

This may be a silly question, but I am curious to know the answer.
As per the official documentation, __init__ doesn't need a return statement. Is there any particular reason why it is that way?
>>> class Complex:
...     def __init__(self, realpart, imagpart):
...         self.r = realpart
...         self.i = imagpart
...
>>> x = Complex(3.0, -4.5)
>>> x.r, x.i
(3.0, -4.5)
__init__() is not a normal function. It is a special method Python uses to customize an instance of a class. It is part of Python's data model:
Called after the instance has been created (by __new__()), but before it is returned to the caller[...].
As you can see from above, when you create a new instance of a class, Python first calls __new__() - which is also a special method - to create a new instance of the class. Then __init__() is called to customize the new instance.
It wouldn't make sense to return anything from __init__(), since the class instance is already created. In fact, Python goes as far as raising an error to prevent this:
>>> class A:
...     def __init__(self):
...         return 'foo'
...
>>> A()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __init__() should return None, not 'str'
>>>
If you want to know what exactly is going on behind the scenes, @eryksun provides a nice explanation:
To completely explain this story, you have to step back to the metaclass __call__ method. In particular the default type.__call__ in CPython calls __new__ and __init__ via their C slot functions, and it's slot_tp_init (defined in Objects/typeobject.c) that enforces the return value to be None. If you use a custom metaclass that overrides type.__call__, it can manually call the __new__ and __init__ methods of the class with no restriction on what __init__ can return -- as silly as that would be.
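To illustrate the quoted point, here is a toy sketch (Python 3, invented names) of a metaclass that overrides type.__call__ and therefore never hits the slot_tp_init check:
class PermissiveMeta(type):
    def __call__(cls, *args, **kwargs):
        # Call __new__ and __init__ by hand instead of going through type.__call__,
        # so the "__init__ must return None" check is never applied.
        instance = cls.__new__(cls, *args, **kwargs)
        result = instance.__init__(*args, **kwargs)
        print('__init__ returned:', result)    # the return value is simply ignored
        return instance

class Weird(metaclass=PermissiveMeta):
    def __init__(self):
        return 'foo'    # would raise TypeError under the default type.__call__

w = Weird()    # prints: __init__ returned: foo
As the quote says, this is silly in practice; it just shows where the restriction actually lives.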
__init__ is called when you create a new instance of a class.
Its main use is initializing the instance variables, and it can only be called on an instance - so you can't call it before you create an instance anyway (creating the instance is what triggers it automatically).
For these reasons, __init__ has no reason to be able to return any value - it's simply not its use case.

Proper way to wrap a constructor with a class decorator, so that it also works for immutable types, in python 3

I'm trying to define a class decorator which (among other things) wraps the constructor with some custom code.
I'm using a class decorator, and not inheritance, because I want the decorator to be applicable to multiple classes in a class hierarchy, and I want the wrapped code to execute for every decorated class in the hierarchy.
I can do something like:
def decorate(klass):
    class Dec(klass):
        def __init__(self, *a, **k):
            # wrapping code here
            super().__init__(*a, **k)
            # wrapping code here
    return Dec
And it works fine for simple test cases. But I'm afraid replacing the class by another might cause subtle breakage (for instance if a decorated class decides to do arcane stuff referencing itself). In addition, it breaks the nice default repr string of the class (it shows up as "decorate.<locals>.Dec" instead of whatever klass was originally).
I tried changing the class itself:
def decorate(klass):
    old_init = klass.__init__
    def new_init(self, *a, **k):
        # wrapper code here
        old_init(self, *a, **k)
        # wrapper code here
    klass.__init__ = new_init
    return klass
This way, it maintains the proper class name and all, and it works fine as long as my constructor doesn't take any arguments. However, it breaks when applying it to, for instance, a type like str:
@decorate
class S(str):
    pass

>>> s = S('foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "decorate.py", line 13, in new_init
    old_init(self, *a, **k)
TypeError: object.__init__() takes no parameters
Identical code works perfectly if I'm not inheriting from str but from some dummy class.
If I override __new__ instead, it works for str but fails for a custom class:
def decorate(klass):
    old_new = klass.__new__
    def new_new(cls, *a, **k):
        # wrapper code
        i = old_new(cls, *a, **k)
        # wrapper code
        return i
    klass.__new__ = new_new
    return klass
@decorate
class Foo:
    pass

@decorate
class Bar(Foo):
    def __init__(self, x):
        self.x = x
Then
>>> bar = Bar(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "decorate.py", line 12, in new_new
    i = old_new(cls, *a, **k)
  File "decorate.py", line 12, in new_new
    i = old_new(cls, *a, **k)
TypeError: object() takes no parameters
How is it possible that the version changing __init__ fails? How is it possible that by replacing a function with another one with the most generic signature (*a, **k) and proxying the call to the original, I get a failure?
Why does object.__init__ seem to violate the convention of accepting at least one positional argument?
How can I make it work in both scenarios? I can't override both __init__ and __new__ to extend the same behaviour; which criterion should be used to dynamically know which one is the right one to hook?
Should this really be implemented completely differently, for instance with a metaclass?
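The question is left open here, but as a sketch of the metaclass direction mentioned in the last line (assuming the wrapping only needs to run once around the whole construction, rather than once per decorated class in the hierarchy), intercepting type.__call__ sidesteps the __init__/__new__ asymmetry entirely; the names below are invented:
class WrapInit(type):
    def __call__(cls, *args, **kwargs):
        # wrapping code (before construction) would go here
        instance = super().__call__(*args, **kwargs)
        # wrapping code (after construction) would go here
        return instance

class S(str, metaclass=WrapInit):
    pass

class Foo(metaclass=WrapInit):
    pass

class Bar(Foo):    # the metaclass is inherited, so Bar is wrapped too
    def __init__(self, x):
        self.x = x

s = S('foo')    # works: str consumes the argument in __new__
bar = Bar(1)    # works: Bar consumes it in __init__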

How do you call an instance of a class in Python?

This is inspired by a question I just saw, "Change what is returned by calling class instance", which was quickly answered with __repr__ (and accepted, so the questioner did not actually intend to call the instance).
Now calling an instance of a class can be done like this:
instance_of_object = object()
instance_of_object()
but we'll get an error, something like TypeError: 'object' object is not callable.
This behavior is defined in the CPython source here.
So to ensure we have this question on Stackoverflow:
How do you actually call an instance of a class in Python?
You call an instance of a class as in the following:
o = object() # create our instance
o() # call the instance
But this will typically give us an error.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'object' object is not callable
How can we call the instance as intended, and perhaps get something useful out of it?
We have to implement the Python special method __call__!
class Knight(object):
    def __call__(self, foo, bar, baz=None):
        print(foo)
        print(bar)
        print(bar)
        print(bar)
        print(baz)
Instantiate the class:
a_knight = Knight()
Now we can call the class instance:
a_knight('ni!', 'ichi', 'pitang-zoom-boing!')
which prints:
ni!
ichi
ichi
ichi
pitang-zoom-boing!
And we have now actually, and successfully, called an instance of the class!
The short answer is that the object class has no __call__ method (you can check that with "dir(object)"). When you create an instance of a class the __init__ method is called and when you call the instance, the __call__ method is called.
Up Votes for Everyone!
Thanks for posting the question and thanks for answering.
I thought I would just share my implementation in case that helps others ...
I have a class (called RTS) and it contains an SQL Query that I access using a 'get'. The class works fine as an independent endpoint. Now I want to call that class from within the program.
Using the answer above I added the following:
class RTS(Resource):
    def __call__(self):
        print("In RTS")

    def get(self, user_id):
        try: ...
In order to call the class from elsewhere in the program I added:
getGR = RTS.get(self, user_unique_id)
Voila - I got the same info I could check on Postman returned within the program.

Python: Multiple ways to initialize a class

I have a class A which can be 'initialized' in two different ways. So, I provide a 'factory-like' interface for it based on the second answer in this post.
class A(object):
    @staticmethod
    def from_method_1(<method_1_parameters>):
        a = A()
        # set parameters of 'a' using <method_1_parameters>
        return a

    @staticmethod
    def from_method_2(<method_2_parameters>):
        a = A()
        # set parameters of 'a' using <method_2_parameters>
        return a
The two methods are different enough that I can't just plug their parameters into the class's __init__. So, class A should be initialized using:
a = A.from_method_1(<method_1_parameters>)
or
a = A.from_method_2(<method_2_parameters>)
However, it is still possible to call the 'default init' for A:
a = A() # just an empty 'A' object
Is there any way to prevent this? I can't just raise NotImplementedError from __init__ because the two 'factory methods' use it too.
Or do I need to use a completely different approach altogether.
It has been a very long time since this question was asked, but I think it's interesting enough to be revived.
When I first saw your problem, the private constructor concept just popped into my mind. It's an important concept in other OOP languages, but since Python doesn't enforce privacy I hadn't really thought about it since Python became my main language.
Therefore, I became curious and I found this "Private Constructor in Python" question. It covers pretty much everything about this topic, and I think the second answer can be helpful here.
Basically, it uses name mangling to declare a pseudo-private class attribute (there is no such thing as a private variable in Python) and assigns the class object to it. Therefore, you'll have an as-private-as-Python-allows variable to use to check whether your initialization was made from a class method or from an outside call. I made the following example based on this mechanism:
class A(object):
    __obj = object()

    def __init__(self, obj=None):
        assert obj == A.__obj, \
            'A object must be created using A.from_method_1 or A.from_method_2'

    @classmethod
    def from_method_1(cls):
        a = A(cls.__obj)
        print('Created from method 1!')
        return a

    @classmethod
    def from_method_2(cls):
        a = A(cls.__obj)
        print('Created from method 2!')
        return a
Tests:
>>> A()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "t.py", line 6, in __init__
    'A object must be created using A.from_method_1 or A.from_method_2'
AssertionError: A object must be created using A.from_method_1 or A.from_method_2
>>> A.from_method_1()
Created from method 1!
<t.A object at 0x7f3f7f2ca450>
>>> A.from_method_2()
Created from method 2!
<t.A object at 0x7f3f7f2ca350>
However, as this solution is a workaround with name mangling, it does have one flaw if you know how to look for it:
>>> A(A._A__obj)
<t.A object at 0x7f3f7f2ca450>

Python: deleting a class attribute in a subclass

I have a subclass and I want it to not include a class attribute that's present on the base class.
I tried this, but it doesn't work:
>>> class A(object):
...     x = 5
>>> class B(A):
...     del x
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    class B(A):
  File "<pyshell#1>", line 2, in B
    del x
NameError: name 'x' is not defined
How can I do this?
You can use delattr(class, field_name) to remove it from the class definition.
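For reference, a tiny sketch of that call - note it removes the attribute from the class it is actually defined on:
class A(object):
    x = 5

delattr(A, 'x')    # equivalent to: del A.x
# A.x now raises AttributeError, because x is gone from A itself
# (and therefore from every subclass of A as well)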
You don't need to delete it. Just override it.
class B(A):
    x = None
or simply don't reference it.
Or consider a different design (instance attribute?).
None of the answers worked for me.
For example, delattr(SubClass, "attrname") (or its exact equivalent, del SubClass.attrname) won't "hide" a parent method, because this is not how method resolution works. It fails with AttributeError('attrname',) instead, as the subclass doesn't have attrname. And, of course, replacing the attribute with None doesn't actually remove it.
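A quick demonstration of that failure mode (class names invented for the example):
class Parent(object):
    attrname = True

class Child(Parent):
    pass

delattr(Child, "attrname")    # raises AttributeError: attrname
# attrname lives in Parent.__dict__, not Child.__dict__, so there is nothing
# on Child to delete - and lookups on Child would still find Parent.attrname anyway.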
Let's consider this base class:
class Spam(object):
    # Also try with `expect = True` and with a `@property` decorator
    def expect(self):
        return "This is pretty much expected"
I know of only two ways to subclass it while hiding the expect attribute:
Using a descriptor class that raises AttributeError from __get__. On attribute lookup, there will be an exception, generally indistinguishable from a lookup failure.
The simplest way is just declaring a property that raises AttributeError. This is essentially what @JBernardo had suggested.
class SpanishInquisition(Spam):
    @property
    def expect(self):
        raise AttributeError("Nobody expects the Spanish Inquisition!")

assert hasattr(Spam, "expect") == True
# assert hasattr(SpanishInquisition, "expect") == False  # Fails!
assert hasattr(SpanishInquisition(), "expect") == False
However, this only works for instances, and not for the class itself (hasattr(SpanishInquisition, "expect") is still True, so the == False assertion would be broken).
If you want all the assertions above to hold true, use this:
class AttributeHider(object):
    def __get__(self, instance, owner):
        raise AttributeError("This is not the attribute you're looking for")

class SpanishInquisition(Spam):
    expect = AttributeHider()

assert hasattr(Spam, "expect") == True
assert hasattr(SpanishInquisition, "expect") == False  # Works!
assert hasattr(SpanishInquisition(), "expect") == False
I believe this is the most elegant method, as the code is clear, generic and compact. Of course, one should really think twice if removing the attribute is what they really want.
Overriding attribute lookup with the __getattribute__ magic method. You can do this in a subclass (or in a mixin, like in the example below, as I wanted to write it just once), and that will hide the attribute on the subclass's instances. If you want to hide the method from the subclass itself as well, you need to use metaclasses.
class ExpectMethodHider(object):
    def __getattribute__(self, name):
        if name == "expect":
            raise AttributeError("Nobody expects the Spanish Inquisition!")
        return super().__getattribute__(name)

class ExpectMethodHidingMetaclass(ExpectMethodHider, type):
    pass

# I've used Python 3.x here, thus the syntax.
# For Python 2.x use __metaclass__ = ExpectMethodHidingMetaclass
class SpanishInquisition(ExpectMethodHider, Spam,
                         metaclass=ExpectMethodHidingMetaclass):
    pass

assert hasattr(Spam, "expect") == True
assert hasattr(SpanishInquisition, "expect") == False
assert hasattr(SpanishInquisition(), "expect") == False
This looks worse (more verbose and less generic) than the method above, but one may consider this approach as well.
Note, this does not work on special ("magic") methods (e.g. __len__), because those bypass __getattribute__. Check out the Special Method Lookup section of the Python documentation for more details. If this is what you need to undo, just override the magic method and call object's implementation, skipping the parent.
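A small illustration of that caveat (a made-up class, using the same Python 3 syntax as the example above):
class Hidden(object):
    def __len__(self):
        return 42

    def __getattribute__(self, name):
        if name == "__len__":
            raise AttributeError(name)
        return super().__getattribute__(name)

h = Hidden()
print(len(h))    # 42 - the implicit special method lookup goes through the type
                 # and bypasses __getattribute__
# h.__len__      # explicit attribute access, by contrast, raises AttributeError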
Needless to say, this only applies to "new-style classes" (the ones that inherit from object), as magic methods and the descriptor protocol aren't supported on old-style classes. Hopefully, those are a thing of the past.
Maybe you could make x a property and raise AttributeError whenever someone tries to access it.
>>> class C:
    x = 5
>>> class D(C):
    def foo(self):
        raise AttributeError
    x = property(foo)
>>> d = D()
>>> print(d.x)
Traceback (most recent call last):
  File "<pyshell#17>", line 3, in foo
    raise AttributeError
AttributeError
Think carefully about why you want to do this; you probably don't. Consider not making B inherit from A.
The idea of subclassing is to specialise an object. In particular, children of a class should be valid instances of the parent class:
>>> class foo(dict): pass
>>> isinstance(foo(), dict)
True
If you implement this behaviour (with e.g. x = property(lambda: AttributeError)), you are breaking the subclassing concept, and this is Bad.
I had the same problem as well, and I thought I had a valid reason to delete the class attribute in the subclass: my superclass (call it A) had a read-only property that provided the value of the attribute, but in my subclass (call it B), the attribute was a read/write instance variable. I found that Python was calling the property function even though I thought the instance variable should have been overriding it. I could have made a separate getter function to be used to access the underlying property, but that seemed like an unnecessary and inelegant cluttering of the interface namespace (as if that really matters).
As it turns out, the answer was to create a new abstract superclass (call it S) with the original common attributes of A, and have A and B derive from S. Since Python has duck typing, it does not really matter that B does not extend A, I can still use them in the same places, since they implicitly implement the same interface.
Trying to do this is probably a bad idea, but...
It doesn't seem to be possible to do this via "proper" inheritance because of how looking up B.x works by default. When getting B.x, x is first looked up in B, and if it's not found there it's searched for in A; but on the other hand, when setting or deleting B.x, only B will be searched. So for example
>>> class A:
...     x = 5
>>> class B(A):
...     pass
>>> B.x
5
>>> del B.x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: class B has no attribute 'x'
>>> B.x = 6
>>> B.x
6
>>> del B.x
>>> B.x
5
Here we see that at first we can't delete B.x, since it doesn't exist on B (A.x exists, and is what gets served when you evaluate B.x). However, by setting B.x to 6, B.x comes to exist on B; it can be retrieved as B.x and deleted by del B.x, whereupon it ceases to exist, so after that A.x is once again served as the response to B.x.
What you could do on the other hand is to use metaclasses to make B.x raise AttributeError:
class NoX(type):
    @property
    def x(self):
        raise AttributeError("We don't like X")

class A(object):
    x = [42]

class B(A, metaclass=NoX):
    pass

print(A.x)
print(B.x)
Now of course purists may yell that this breaks the LSP, but it's not that simple. It all boils down to whether you consider that you've created a subtype by doing this. The issubclass and isinstance checks say yes, but LSP says no (and many programmers would assume "yes", since you inherit from A).
The LSP means that if B is a subtype of A, then we should be able to use B wherever we could use A; but since we can't do that with this construct, we could conclude that B actually isn't a subtype of A, and therefore the LSP isn't violated.
