Call Python method if it exists

How to call a subclass method from a base class method, only if the subclass supports that method? And what's the best way to do that? Illustrative example, I have an animal that protects my house: if someone walks by it will look angry, and it will bark if it can.
Example code:
class Protector(object):
    def protect(self):
        self.lookangry()
        if hasattr(self, 'bark'):
            self.bark()

class GermanShepherd(Protector):
    def lookangry(self):
        print u') _ _ __/°°¬'

    def bark(self):
        print 'wau wau'

class ScaryCat(Protector):
    def lookangry(self):
        print '=^..^='
I can think of lots of alternative implementations for this:

1. Using hasattr, as above.
2. try: self.bark() except AttributeError: pass, but that also catches any AttributeError raised inside bark itself.
3. Same as 2, but inspect the error message to make sure it's the right AttributeError.
4. Like 2, but define an abstract bark method that raises NotImplementedError in the abstract class and catch NotImplementedError instead of AttributeError. With this solution Pylint will complain that I forgot to override the abstract method in ScaryCat.
5. Define an empty bark method in the abstract class:
class Protector(object):
    def protect(self):
        self.lookangry()
        self.bark()

    def bark(self):
        pass
I figured that in Python there should usually be one obvious way to do something. In this case it's not clear to me which one that is. Which of these options is most readable, least likely to introduce a bug when the code changes, and most in line with coding standards, especially Pylint? Is there a better way to do it that I've missed?

It seems to me you're thinking about inheritance incorrectly. The base class is supposed to encapsulate everything that is shared across any of the subclasses. If something is not shared by all subclasses, by definition it is not part of the base class.
So your statement "if someone walks by it will look angry, and it will bark if it can" doesn't make sense to me. The "bark if it can" part is not shared across all subclasses, therefore it shouldn't be implemented in the base class.
What should happen is that the subclass that you want to bark adds this functionality to the protect() method. As in:
class Protector():
    def protect(self):
        self.lookangry()

class GermanShepherd(Protector):
    def protect(self):
        super().protect()  # or super(GermanShepherd, self).protect() for Python 2
        self.bark()
This way all subclasses will lookangry(), but the subclasses which implement a bark() method will have it as part of the extended functionality of the superclass's protect() method.
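A quick usage sketch of this design, with the question's classes filled in (ScaryCat simply inherits the base protect()):

```python
class Protector:
    def protect(self):
        self.lookangry()

class GermanShepherd(Protector):
    def lookangry(self):
        print(') _ _ __/°°¬')

    def bark(self):
        print('wau wau')

    def protect(self):
        super().protect()  # look angry first, like every Protector
        self.bark()        # then add the dog-specific behaviour

class ScaryCat(Protector):
    def lookangry(self):
        print('=^..^=')

GermanShepherd().protect()  # angry face, then 'wau wau'
ScaryCat().protect()        # only the angry face
```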

I think a 6.) could be that the Protector class makes just the basic shared methods abstract, and thus required, while leaving the extra methods to its heirs. Of course this can be split into more sub-classes; see https://repl.it/repls/AridScrawnyCoderesource (written in Python 3.6):
class Protector(object):
    def lookangry(self):
        raise NotImplementedError("If it can't look angry, it can't protect")

    def protect(self):
        self.lookangry()

class Doggo(Protector):
    def bark(self):
        raise NotImplementedError("If a dog can't bark, it can't protect")

    def protect(self):
        super().protect()
        self.bark()

class GermanShepherd(Doggo):
    def lookangry(self):
        print(') _ _ __/°°¬')

    def bark(self):
        print('wau wau')

class Pug(Doggo):
    # We will not consider that screeching as barking, so no bark method
    def lookangry(self):
        print('(◉ω◉)')

class ScaryCat(Protector):
    def lookangry(self):
        print('o(≧o≦)o')

class Kitten(Protector):
    pass

doggo = GermanShepherd()
doggo.protect()

try:
    gleam_of_silver = Pug()
    gleam_of_silver.protect()
except NotImplementedError as e:
    print(e)

cheezburger = ScaryCat()
cheezburger.protect()

try:
    ball_of_wool = Kitten()
    ball_of_wool.protect()
except NotImplementedError as e:
    print(e)

You missed one possibility; call it option 4.5:
Define a bark method that raises NotImplementedError, as in your option 4, but don't make it abstract.
This eliminates Pylint's complaint and, more importantly, eliminates the legitimate problem it was complaining about.
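A minimal sketch of option 4.5, using the question's class names (not the poster's exact code):

```python
class Protector(object):
    def lookangry(self):
        raise NotImplementedError  # genuinely abstract: every Protector overrides this

    def bark(self):
        # Deliberately NOT abstract: a default that means "this protector can't bark".
        # Pylint has nothing to complain about in subclasses that skip it.
        raise NotImplementedError

    def protect(self):
        self.lookangry()
        try:
            self.bark()
        except NotImplementedError:
            pass  # barking is an optional part of protecting

class ScaryCat(Protector):
    def lookangry(self):
        print('=^..^=')

ScaryCat().protect()  # prints the face; the missing bark is silently skipped
```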
As for your other options:
hasattr is unnecessary LBYL, which is usually not Pythonic.
The except problem can be handled by doing bark = self.bark inside a try block, then doing bark() if it passes. This is sometimes necessary, but the fact that it's a bit clumsy and hasn't been "fixed" should give you an idea of how often it's worth doing.
Inspecting error messages is an anti-pattern. Anything that's not a separate, documented argument value is subject to change across Python versions and implementations. (Plus, what if ManWithSidekick.bark() does self.sidekick.bark()? How would you distinguish the AttributeError there?)
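For reference, the two-step lookup that rescues option 2 can be sketched like this (class names from the question): only the attribute access is guarded, so an AttributeError raised inside bark() itself propagates normally.

```python
class Protector(object):
    def protect(self):
        self.lookangry()
        try:
            bark = self.bark   # only the lookup is inside the try block
        except AttributeError:
            pass               # this protector simply can't bark
        else:
            bark()             # an AttributeError raised in bark() now escapes

class GermanShepherd(Protector):
    def lookangry(self):
        print(') _ _ __/°°¬')

    def bark(self):
        print('wau wau')

class ScaryCat(Protector):
    def lookangry(self):
        print('=^..^=')

GermanShepherd().protect()
ScaryCat().protect()
```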
So, that leaves 2, 4.5, and 5.
I think in most cases, either 4.5 or 5 will be the right thing to do. The difference between them is not pragmatic, but conceptual: if a ScaryCat is an animal that barks silently, use option 5; if not, then barking must be an optional part of protection that not all protectors do, in which case use option 4.5.
For this toy example, I think I'd use option 4.5. And I think that will be the case with most toy examples you come up with.
However, I suspect that most real-life examples will be pretty different:
Most real-life examples won't need this deep hierarchy.
Of those that do, usually bark will either be implemented by all subclasses, or won't be called by the superclass.
Of those that do need this, I think option 5 will usually fit. Sure, barking silently is not something a ScaryCat does, but parse_frame silently is something a ProxyProtocol does.
And there are so few exceptions left after that, that it's hard to speak about them abstractly and generally.

Related

Implementing abstract methods in Python 3

I am just trying to practice coding an abstract method in Python, and I have the following code for it:
import abc

class test(abc.ABC):
    @abc.abstractmethod
    def first(self, name):
        """This is to be implemented"""

class Extendtest(test):
    def __init__(self, name):
        self.name = name

    def first(self):
        print("Changing name!")
        self.name = "Shaayan"

    def second(self, value):
        print("Adding second argument!")
        self.value = value

e = Extendtest("Subhayan")
print(e.name)
e.first()
print(e.name)
I intentionally changed the signature of the first method when implementing the abstract method.
But if I change the signature, Python does not give any error and goes through as expected.
Is there no way in Python by which I can force strict abstraction?
This is not a new question about Python, and the short answer is:
No, it is not possible out of the box. The easiest way is either to reuse a library like zope or to implement this behavior on your own.
There was a proposal for ABCs to check arguments, and here is what Guido answered:

That is not a new idea. So far I have always rejected it because I worry about both false positives and false negatives. Trying to enforce that the method behaves as it should (or even its return type) is hopeless; there can be a variety of reasons to modify the argument list while still conforming to (the intent of) the interface. I also worry that it will slow everything down.

That said, if you want to provide a standard mechanism that can optionally be turned on to check argument conformance, e.g. by using a class or method decorator on the subclass, I would be fine with that (as long as it runs purely at class-definition time; it shouldn't slow down class instantiation or method calls). It will probably even find some bugs. It will also surely have to be tuned to avoid certain classes of false positives.
reference to the thread
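Guido's "purely at class-definition time" suggestion can be sketched with `__init_subclass__` (Python 3.6+) and `inspect.signature`. The class names below are made up for illustration:

```python
import inspect

class Base:
    def first(self, name):
        """To be implemented by subclasses."""
        raise NotImplementedError

    def __init_subclass__(cls, **kwargs):
        # Runs once per subclass definition, never at call time.
        super().__init_subclass__(**kwargs)
        base_sig = inspect.signature(Base.first)
        sub = cls.__dict__.get('first')
        if sub is not None and inspect.signature(sub) != base_sig:
            raise TypeError(
                "%s.first%s does not match Base.first%s"
                % (cls.__name__, inspect.signature(sub), base_sig))

class Good(Base):
    def first(self, name):
        return name

# A subclass with the wrong signature fails at definition time:
try:
    class Bad(Base):
        def first(self):  # 'name' is missing
            pass
except TypeError as e:
    print(e)
```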

Class attributes in Python

Is there any difference in the following two pieces of code? If not, is one preferred over the other? Why would we be allowed to create class attributes dynamically?
Snippet 1
class Test(object):
    def setClassAttribute(self):
        Test.classAttribute = "Class Attribute"

Test().setClassAttribute()
Snippet 2
class Test(object):
    classAttribute = "Class Attribute"

Test()
First, setting a class attribute from an instance method is a weird thing to do. And ignoring the self parameter and going right to Test is another weird thing to do, unless you specifically want all subclasses to share a single value.*
* If you did specifically want all subclasses to share a single value, I'd make it a @staticmethod with no params (and set it on Test). But in that case it isn't even really being used as a class attribute, and might work better as a module global, with a free function to set it.
So, even if you wanted to go with the first version, I'd write it like this:
class Test(object):
    @classmethod
    def setClassAttribute(cls):
        cls.classAttribute = "Class Attribute"

Test.setClassAttribute()
However, all that being said, I think the second is far more pythonic. Here are the considerations:
In general, getters and setters are strongly discouraged in Python.
The first one leaves a gap during which the class exists but has no attribute.
Simple is better than complex.
The one thing to keep in mind is that part of the reason getters and setters are unnecessary in Python is that you can always replace an attribute with a @property if you later need it to be computed, validated, etc. With a class attribute, that's not quite as perfect a solution, but it's usually good enough.
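For instance, a plain attribute can later grow validation without changing any caller. A made-up Config class to illustrate the idea:

```python
class Config(object):
    def __init__(self):
        self._timeout = 30  # started life as a plain attribute

    @property
    def timeout(self):
        return self._timeout

    @timeout.setter
    def timeout(self, value):
        # Validation added later; callers still use plain attribute syntax.
        if value <= 0:
            raise ValueError("timeout must be positive")
        self._timeout = value

c = Config()
c.timeout = 10      # looks like an attribute assignment, runs the setter
print(c.timeout)    # 10
```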
One last thing: class attributes (and class methods, except for alternate constructor) are often a sign of a non-pythonic design at a higher level. Not always, of course, but often enough that it's worth explaining out loud why you think you need a class attribute and making sure it makes sense. (And if you've ever programmed in a language whose idioms make extensive use of class attributes—especially if it's Java—go find someone who's never used Java and try to explain it to him.)
It's more natural to do it like #2, but notice that they do different things. With #2, the class always has the attribute. With #1, it won't have the attribute until you call setClassAttribute.
You asked, "Why would we be allowed to create class attributes dynamically?" With Python, the question often is not "why would we be allowed to", but "why should we be prevented?" A class is an object like any other, it has attributes. Objects (generally) can get new attributes at any time. There's no reason to make a class be an exception to that rule.
I think #2 feels more natural. #1's implementation means that the attribute doesn't get set until an actual instance of the class gets created, which to me seems counterintuitive to what a class attribute (vs. object attribute) should be.
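The timing difference between the two snippets can be seen directly with hasattr (hypothetical names mirroring the snippets):

```python
class Lazy(object):
    def set_class_attribute(self):
        Lazy.class_attribute = "Class Attribute"

class Eager(object):
    class_attribute = "Class Attribute"

print(hasattr(Eager, 'class_attribute'))  # True: set at class-definition time
print(hasattr(Lazy, 'class_attribute'))   # False: nothing has called the setter yet
Lazy().set_class_attribute()
print(hasattr(Lazy, 'class_attribute'))   # True only after the call
```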

How to reproduce a Java interface behaviour in a pythonic way

Let us define a class called Filter. I want this class to be extended by subclasses and I want all these subclasses to override a method : job.
More specifically
class Filter(object):
    def __init__(self, csvin=None):
        self._input = csvin

    # I want this to be abstract. All the classes that inherit from Filter
    # should implement their own version of job.
    def job(self):
        pass
Said differently, I want to make sure that any subclass of Filter has a method called job.
I heard about the module abc, but I also read about these concepts called duck-typing and EAFP. My understanding is that, if I am a duck, I will just try to run
f = SomeFilter()
f.job()
and see if it works. It is my problem if it raises an exception; I should have been more careful when I wrote the class SomeFilter.
I am pretty sure I do not fully understand the meaning of duck typing and EAFP, but if it means that I have to postpone debugging as late as possible (that is, to invocation time), then I disagree with this way of thinking. I do not understand why so many people seem to appreciate this EAFP philosophy, but I wish to be part of them.
Can someone convert me and explain how to achieve this in a safe and predictable manner, that is, by preventing the programmer from making a mistake when extending Filter, in a pythonic way?
You can use raise NotImplementedError, as per the documentation:
class Filter(object):
    def __init__(self, csvin=None):
        self._input = csvin

    # I want this to be abstract. All the classes that inherit from Filter
    # should implement their own version of job.
    def job(self):
        raise NotImplementedError
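If you want the mistake caught at instantiation time rather than at call time, the abc module the question mentions does exactly that; a minimal sketch (CsvFilter is a made-up concrete subclass):

```python
import abc

class Filter(abc.ABC):
    def __init__(self, csvin=None):
        self._input = csvin

    @abc.abstractmethod
    def job(self):
        """Subclasses must implement this."""

class CsvFilter(Filter):      # hypothetical concrete subclass
    def job(self):
        return "filtered"

try:
    Filter()                  # fails immediately, not when job() is called
except TypeError as e:
    print(e)

print(CsvFilter().job())
```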

What is wrong with this singleton?

Currently I'm using a rather simplistic implementation of a singleton. However, I've never seen anything like this suggested on the web, which leads me to believe there might be something wrong with it...
class Singleton:
    def __init__(self):
        raise ...

    @staticmethod
    def some():
        pass

    @staticmethod
    def another():
        pass
Are there any disadvantages to this implementation of a singleton (making all class members static)? It is somewhat similar to using modules as singletons, except you wrap everything in a class.
Edit: I know about other ways to implement singletons in Python. What I don't like about them is that none of them are explicit (which goes against the Python zen):
Since I do a = Class() instead of something like a = Class.Instance(), it is not obvious that I'm dealing with an object with shared state (see note #1). If all members are static I at least have Class.someMethod(), which kinda sorta suggests it's a singleton. What I don't like about this approach is that you can't use constructors and destructors, which removes the major advantage that singletons have over free functions: what you can do when they are created and destroyed (see note #2).
Note #1: I know I shouldn't care about the singleton's state (if I do, then it shouldn't be a singleton in the first place). I still want it to be explicit about what kind of class it is.
Note #2: When you create a singleton you could do some initialization in its constructor. For example, in a singleton that deals with a graphics library, you could initialize the library in the constructor. This way initialization and deinitialization happen automatically in the singleton's constructor and destructor.
Or in a ResourceManager: the destructor could check if upon its destruction there still are resources in memory and act accordingly.
If you use free functions instead of singletons you have to do all of this by hand.
Yeah, it's a singleton, that's what's wrong with it. If you're going to make all methods static, don't bother making a class at all, just use free functions.
That's not a singleton. It's a stateless global collection of methods, wrapped in a type.
Nothing inherently wrong with it, unless you believe that it is a singleton.
Singleton implies one instance only.
It's impossible to mock, so it's hard to test.
That's what's wrong with it.
Actually having 1 object of a testable class is considered better practice.
We are borg!
This way state is shared across every instance of the object.
class Borg(object):
    state = {}

    def __init__(self):
        self.__dict__ = self.state

a = Borg()
a.a = 34
b = Borg()
print b.a

c = Borg()
c.c = "haha"
print a.c
Just to quote the original
A singleton is a class which creates the same instance each time the constructor is called. Your class has no constructor, so it can not be a singleton.
The borg pattern is something different: here each instance shares the same attributes. Alex Martelli, the inventor of the borg pattern, gave a talk at EuroPython 2011 where he said that Guido van Rossum dislikes the borg pattern, but he did not say why.
A real singleton in Python has two advantages:
you can subclass it
if you refactor your code and decide that you want different instances, the code change is minimized.
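One common way to get such a real singleton is to override `__new__`. This is a minimal sketch, not a production version: it ignores thread safety, and `__init__` (if you add one) would re-run on every call.

```python
class Singleton(object):
    _instance = None

    def __new__(cls):
        # Create the single instance on first use, then always return it.
        if cls._instance is None:
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)   # True: both names refer to the same object
```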

What is the correct way to extend a parent class method in modern Python

I frequently do this sort of thing:
class Person(object):
    def greet(self):
        print "Hello"

class Waiter(Person):
    def greet(self):
        Person.greet(self)
        print "Would you like fries with that?"
The line Person.greet(self) doesn't seem right. If I ever change what class Waiter inherits from I'm going to have to track down every one of these and replace them all.
What is the correct way to do this in modern Python? Both 2.x and 3.x; I understand there were changes in this area in 3.
If it matters any I generally stick to single inheritance, but if extra stuff is required to accommodate multiple inheritance correctly it would be good to know about that.
You use super:

Return a proxy object that delegates method calls to a parent or sibling class of type. This is useful for accessing inherited methods that have been overridden in a class. The search order is the same as that used by getattr() except that the type itself is skipped.
In other words, a call to super returns a fake object which delegates attribute lookups to classes above you in the inheritance chain. Points to note:
This does not work with old-style classes -- so if you are using Python 2.x, you need to ensure that the top class in your hierarchy inherits from object.
You need to pass your own class and instance to super in Python 2.x. This requirement was waived in 3.x.
This will handle all multiple inheritance correctly. (When you have a multiple inheritance tree in Python, a method resolution order is generated and the lookups go through parent classes in this order.)
Take care: there are many places to get confused about multiple inheritance in Python. You might want to read super() Considered Harmful. If you are sure that you are going to stick to a single inheritance tree, and that you are not going to change the names of classes in said tree, you can hardcode the class names as you do above and everything will work fine.
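To make the multiple-inheritance point concrete, here is a toy diamond where cooperative super() visits every class exactly once, in MRO order:

```python
class A:
    def greet(self):
        print("A")

class B(A):
    def greet(self):
        super().greet()
        print("B")

class C(A):
    def greet(self):
        super().greet()
        print("C")

class D(B, C):
    def greet(self):
        super().greet()
        print("D")

D().greet()  # prints A, C, B, D: each super() call follows D's MRO, not the static parent
print([k.__name__ for k in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
```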
Not sure if you're looking for this but you can call a parent without referring to it by doing this.
super(Waiter, self).greet()
This will call the greet() function in Person.
katrielalex's answer is really the answer to your question, but this wouldn't fit in a comment.
If you plan to go about using super everywhere, and you ever think in terms of multiple inheritance, definitely read the "super() Considered Harmful" link. super() is a great tool, but it takes understanding to use correctly. In my experience, for simple things that don't seem likely to get into complicated diamond inheritance tangles, it's actually easier and less tedious to just call the superclass directly and deal with the renames when you change the name of the base class.
In fact, in Python2 you have to include the current class name, which is usually more likely to change than the base class name. (And in fact sometimes it's very difficult to pass a reference to the current class if you're doing wacky things; at the point when the method is being defined the class isn't bound to any name, and at the point when the super call is executed the original name of the class may not still be bound to the class, such as when you're using a class decorator)
I'd like to make this answer more explicit with an example. It's just like how we do it in JavaScript. The short answer: do it like we invoke the parent constructor, using super.
class Person(object):
    def __init__(self, name):
        self.name = name

    def greet(self):
        print(f"Hello, I'm {self.name}")

class Waiter(Person):
    def __init__(self, name):
        # Invoke the parent constructor,
        # or super(Waiter, self).__init__(name)
        super().__init__(name)

    def greet(self):
        super(Waiter, self).greet()
        print("Would you like fries with that?")

waiter = Waiter("John")
waiter.greet()
# Hello, I'm John
# Would you like fries with that?
