I know virtual methods from PHP or Java.
How can they be implemented in Python?
Or do I have to define an empty method in an abstract class and override it?
Sure, and you don't even have to define a method in the base class. In Python, methods are better than virtual: they're completely dynamic, because Python uses duck typing.
class Dog:
    def say(self):
        print("hau")

class Cat:
    def say(self):
        print("meow")

pet = Dog()
pet.say()          # prints "hau"
another_pet = Cat()
another_pet.say()  # prints "meow"

my_pets = [pet, another_pet]
for a_pet in my_pets:
    a_pet.say()
Cat and Dog in Python don't even have to derive from a common base class to allow this behavior - you get it for free. That said, some programmers prefer to define their class hierarchies in a more rigid way to document them better and to impose some strictness of typing. This is also possible - see for example the abc standard module, sketched briefly below.
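For completeness, here is a minimal sketch of that stricter abc style, under the assumption that you want instantiation of the base class to fail (the Pet name is made up for illustration):
from abc import ABC, abstractmethod

class Pet(ABC):
    @abstractmethod
    def say(self):
        ...

class Dog(Pet):
    def say(self):
        print("hau")

Dog().say()  # prints "hau"
# Pet() would raise TypeError: can't instantiate an abstract class with abstract method say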
raise NotImplementedError() (dynamic type checking)
This is the recommended exception to raise on "pure virtual methods" of "abstract" base classes that don't implement a method.
https://docs.python.org/3.5/library/exceptions.html#NotImplementedError says:
This exception is derived from RuntimeError. In user defined base classes, abstract methods should raise this exception when they require derived classes to override the method.
As others said, this is mostly a documentation convention and is not required, but this way you get a more meaningful exception than a missing attribute error.
dynamic.py
class Base(object):
    def virtualMethod(self):
        raise NotImplementedError()
    def usesVirtualMethod(self):
        return self.virtualMethod() + 1

class Derived(Base):
    def virtualMethod(self):
        return 1

print(Derived().usesVirtualMethod())
Base().usesVirtualMethod()
gives:
2
Traceback (most recent call last):
File "./dynamic.py", line 13, in <module>
Base().usesVirtualMethod()
File "./dynamic.py", line 6, in usesVirtualMethod
return self.virtualMethod() + 1
File "./dynamic.py", line 4, in virtualMethod
raise NotImplementedError()
NotImplementedError
typing.Protocol (static type checking, Python 3.8)
Python 3.8 added typing.Protocol, which also lets us statically check that a virtual method is implemented on a subclass.
protocol.py
from typing import Protocol

class CanFly(Protocol):
    def fly(self) -> str:
        pass
    def fly_fast(self) -> str:
        return 'CanFly.fly_fast'

class Bird(CanFly):
    def fly(self):
        return 'Bird.fly'
    def fly_fast(self):
        return 'Bird.fly_fast'

class FakeBird(CanFly):
    pass

assert Bird().fly() == 'Bird.fly'
assert Bird().fly_fast() == 'Bird.fly_fast'
# mypy error
assert FakeBird().fly() is None
# mypy error
assert FakeBird().fly_fast() == 'CanFly.fly_fast'
If we run this file, the asserts pass, as we didn't add any dynamic typechecking:
python protocol.py
but if we typecheck with mypy:
python -m pip install --user mypy
mypy protocol.py
we get an error as expected:
protocol.py:22: error: Cannot instantiate abstract class "FakeBird" with abstract attribute "fly"
protocol.py:24: error: Cannot instantiate abstract class "FakeBird" with abstract attribute "fly"
It is a bit unfortunate however that the error checking only picks up the error on instantiation, and not at class definition.
typing.Protocol counts methods as abstract when their body is "empty"
I'm not sure exactly what counts as empty, but all of the following do:
pass
... ellipsis object
raise NotImplementedError()
So the best possibility is likely:
protocol_empty.py
from typing import Protocol

class CanFly(Protocol):
    def fly(self) -> None:
        raise NotImplementedError()

class Bird(CanFly):
    def fly(self):
        return None

class FakeBird(CanFly):
    pass

Bird().fly()
FakeBird().fly()
which fails as desired:
protocol_empty.py:15: error: Cannot instantiate abstract class "FakeBird" with abstract attribute "fly"
protocol_empty.py:15: note: The following method was marked implicitly abstract because it has an empty function body: "fly". If it is not meant to be abstract, explicitly return None.
but if e.g. we replace the:
raise NotImplementedError()
with some random "non-empty" statement such as:
x = 1
then mypy does not count the method as abstract and gives no errors.
@abc.abstractmethod: metaclass syntax changed in Python 3
In Python 3 metaclasses are declared as:
class C(metaclass=abc.ABCMeta):
instead of the Python 2:
class C:
    __metaclass__ = abc.ABCMeta
so now, to use @abc.abstractmethod, which was previously mentioned at https://stackoverflow.com/a/19316077/895245, you need:
abc_cheat.py
import abc

class C(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def m(self, i):
        pass

try:
    c = C()
except TypeError:
    pass
else:
    assert False
But TODO: what is the advantage of ABCMeta over just raising NotImplementedError? It has the disadvantage that you are forced to define a metaclass, so it is more work, but I don't see an advantage. https://peps.python.org/pep-0544 does mention both approaches in passing.
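One practical difference, shown in the sketch below (class names invented for illustration): with ABCMeta the failure happens as soon as the incomplete subclass is instantiated, while the plain raise NotImplementedError approach only fails when the missing method is actually called.
import abc

class AbcBase(abc.ABC):
    @abc.abstractmethod
    def m(self):
        ...

class PlainBase:
    def m(self):
        raise NotImplementedError()

class AbcDerived(AbcBase):
    pass

class PlainDerived(PlainBase):
    pass

try:
    AbcDerived()  # fails immediately: abstract method m is not overridden
except TypeError:
    pass

p = PlainDerived()  # instantiation succeeds...
try:
    p.m()  # ...and the error only surfaces on the call
except NotImplementedError:
    pass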
Outro
Bibliography:
https://peps.python.org/pep-0544 the typing.Protocol PEP
Is it possible to make abstract classes?
What to use in replacement of an interface/protocol in python
Tested on Python 3.10.7, mypy 0.982, Ubuntu 21.10.
Python methods are always virtual.
Actually, since version 2.6 Python provides something called abstract base classes, and you can explicitly mark methods as abstract like this:
from abc import ABCMeta
from abc import abstractmethod
...

class C:
    __metaclass__ = ABCMeta

    @abstractmethod
    def my_abstract_method(self, ...):
It works very well, provided the class does not inherit from classes that already use metaclasses.
source: http://docs.python.org/2/library/abc.html
Python methods are always virtual, as Ignacio already said.
Still, class inheritance may be a better approach to implement what you want:
class Animal:
    def __init__(self, name, legs):
        self.name = name
        self.legs = legs

    def getLegs(self):
        return "{0} has {1} legs".format(self.name, self.legs)

    def says(self):
        return "I am an unknown animal"

class Dog(Animal):  # <- Dog inherits from Animal here (all methods as well)
    def says(self):  # <- called instead of Animal's says method
        return "I am a dog named {0}".format(self.name)

    def somethingOnlyADogCanDo(self):
        return "be loyal"

formless = Animal("Animal", 0)
rover = Dog("Rover", 4)   # <- calls the initialization method from Animal

print(formless.says())    # <- calls Animal's says method
print(rover.says())       # <- calls Dog's says method
print(rover.getLegs())    # <- calls getLegs method from the Animal class
Results should be:
I am an unknown animal
I am a dog named Rover
Rover has 4 legs
Something like a virtual method in C++ (calling the method implementation of a derived class through a reference or pointer to the base class) doesn't make much sense in Python, as Python doesn't have static typing. (I don't know how virtual methods work in Java and PHP though.)
But if by "virtual" you mean calling the bottom-most implementation in the inheritance hierarchy, then that's what you always get in Python, as several answers point out.
Well, almost always...
As dplamp pointed out, not all methods in Python behave like that. Methods with double-underscore (name-mangled) names don't, and I think that's a not-so-well-known feature.
Consider this artificial example
class A:
    def prop_a(self):
        return 1
    def prop_b(self):
        return 10 * self.prop_a()

class B(A):
    def prop_a(self):
        return 2
Now
>>> B().prop_b()
20
>>> A().prop_b()
10
However, consider this one
class A:
    def __prop_a(self):
        return 1
    def prop_b(self):
        return 10 * self.__prop_a()

class B(A):
    def __prop_a(self):
        return 2
Now
>>> B().prop_b()
10
>>> A().prop_b()
10
The only thing we've changed was making prop_a() a double-underscore (name-mangled) method.
A problem with the first behavior can be that you can't change the behavior of prop_a() in the derived class without impacting the behavior of prop_b(). This very nice talk by Raymond Hettinger gives an example of a use case where this is inconvenient.
Python 3.6 introduced __init_subclass__, which lets you simply do this:
class A:
    def method(self):
        '''method needs to be overwritten'''
        return NotImplemented

    def __init_subclass__(cls):
        if cls.method is A.method:
            raise NotImplementedError(
                'Subclass has not overwritten method "method"!')
The benefit of this solution is that you avoid the abc metaclass and give the user a direct, imperative hint on how to do it right. In contrast to the other answers here that raise NotImplementedError when the method is called, this check runs when the subclass is defined, not only if the user calls the method.
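A quick demonstration of that check (class names are just for illustration): a subclass that does not override the method fails as soon as it is defined:
class Good(A):
    def method(self):
        return 42  # overridden, so the class definition succeeds

try:
    class Bad(A):  # does not override method
        pass
except NotImplementedError as exc:
    print(exc)  # the error raised by A.__init_subclass__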
Related
public interface IInterface
{
    void show();
}

public class MyClass : IInterface
{
    #region IInterface Members
    public void show()
    {
        Console.WriteLine("Hello World!");
    }
    #endregion
}
How do I implement the Python equivalent of this C# code?
class IInterface(object):
    def __init__(self):
        pass

    def show(self):
        raise Exception("NotImplementedException")

class MyClass(IInterface):
    def __init__(self):
        IInterface.__init__(self)

    def show(self):
        print('Hello World!')
Is this a good idea? Please give examples in your answers.
As mentioned by others here:
Interfaces are not necessary in Python. This is because Python has proper multiple inheritance, and also duck typing, which means that the places where you must have interfaces in Java, you don't have to have them in Python.
That said, there are still several uses for interfaces. Some of them are covered by Python's Abstract Base Classes, introduced in Python 2.6. They are useful if you want to make base classes that cannot be instantiated, but provide a specific interface or part of an implementation.
Another usage is if you somehow want to specify that an object implements a specific interface, and you can use ABCs for that too by subclassing from them. Another way is zope.interface, a module that is part of the Zope Component Architecture, a really awesomely cool component framework. Here you don't subclass from the interfaces, but instead mark classes (or even instances) as implementing an interface. This can also be used to look up components from a component registry. Supercool!
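For the curious, here is a minimal zope.interface sketch, assuming the package is installed (pip install zope.interface); the interface and class names are made up for illustration:
from zope.interface import Interface, implementer
from zope.interface.verify import verifyObject

class IObserver(Interface):
    def notify(event):  # note: no self in zope interface declarations
        """Handle an event."""

@implementer(IObserver)
class LogObserver:
    def notify(self, event):
        print("got:", event)

verifyObject(IObserver, LogObserver())  # raises if the interface is not satisfied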
Implementing interfaces with abstract base classes is much simpler in modern Python 3 and they serve a purpose as an interface contract for plug-in extensions.
Create the interface/abstract base class:
from abc import ABC, abstractmethod

class AccountingSystem(ABC):

    @abstractmethod
    def create_purchase_invoice(self, purchase):
        pass

    @abstractmethod
    def create_sale_invoice(self, sale):
        log.debug('Creating sale invoice', sale)
Create a normal subclass and override all abstract methods:
class GizmoAccountingSystem(AccountingSystem):

    def create_purchase_invoice(self, purchase):
        submit_to_gizmo_purchase_service(purchase)

    def create_sale_invoice(self, sale):
        super().create_sale_invoice(sale)
        submit_to_gizmo_sale_service(sale)
You can optionally have common implementation in the abstract methods as in create_sale_invoice(), calling it with super() explicitly in the subclass as above.
Instantiation of a subclass that does not implement all the abstract methods fails:
class IncompleteAccountingSystem(AccountingSystem):
    pass
>>> accounting = IncompleteAccountingSystem()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class IncompleteAccountingSystem with abstract methods
create_purchase_invoice, create_sale_invoice
You can also have abstract properties, static methods, and class methods by combining the corresponding decorators with @abstractmethod, as sketched below.
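A minimal sketch of those combinations (the class and method names are invented for illustration; note that @abstractmethod must be the innermost decorator):
from abc import ABC, abstractmethod

class ReportingSystem(ABC):

    @property
    @abstractmethod
    def currency(self):
        ...

    @classmethod
    @abstractmethod
    def from_config(cls, config):
        ...

    @staticmethod
    @abstractmethod
    def validate(invoice):
        ...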
Abstract base classes are great for implementing plugin-based systems. All imported subclasses of a class are accessible via __subclasses__(), so if you load all classes from a plugin directory with importlib.import_module() and if they subclass the base class, you have direct access to them via __subclasses__() and you can be sure that the interface contract is enforced for all of them during instantiation.
Here's the plugin loading implementation for the AccountingSystem example above:
...
from importlib import import_module

class AccountingSystem(ABC):

    ...

    _instance = None

    @classmethod
    def instance(cls):
        if not cls._instance:
            module_name = settings.ACCOUNTING_SYSTEM_MODULE_NAME
            import_module(module_name)
            subclasses = cls.__subclasses__()
            if len(subclasses) > 1:
                raise InvalidAccountingSystemError('More than one '
                        f'accounting module: {subclasses}')
            if not subclasses or module_name not in str(subclasses[0]):
                raise InvalidAccountingSystemError('Accounting module '
                        f'{module_name} does not exist or does not '
                        'subclass AccountingSystem')
            cls._instance = subclasses[0]()
        return cls._instance
Then you can access the accounting system plugin object through the AccountingSystem class:
>>> accountingsystem = AccountingSystem.instance()
(Inspired by this PyMOTW-3 post.)
Using the abc module for abstract base classes seems to do the trick.
from abc import ABCMeta, abstractmethod

class IInterface(metaclass=ABCMeta):

    @classmethod
    def version(cls): return "1.0"

    @abstractmethod
    def show(self): raise NotImplementedError

class MyServer(IInterface):
    def show(self):
        print('Hello, World 2!')

class MyBadServer(object):
    def show(self):
        print('Damn you, world!')

class MyClient(object):

    def __init__(self, server):
        if not isinstance(server, IInterface): raise Exception('Bad interface')
        if not IInterface.version() == '1.0': raise Exception('Bad revision')

        self._server = server

    def client_show(self):
        self._server.show()

# This call will fail with an exception
try:
    x = MyClient(MyBadServer)
except Exception as exc:
    print('Failed as it should!')

# This will pass with glory
MyClient(MyServer()).client_show()
I invite you to explore what Python 3.8 has to offer for the subject matter in the form of structural subtyping (static duck typing), PEP 544.
See the short description https://docs.python.org/3/library/typing.html#typing.Protocol
For the simple example here it goes like this:
from typing import Protocol

class MyShowProto(Protocol):
    def show(self):
        ...

class MyClass:
    def show(self):
        print('Hello World!')

class MyOtherClass:
    pass

def foo(o: MyShowProto):
    return o.show()

foo(MyClass())       # ok
foo(MyOtherClass())  # fails
foo(MyOtherClass()) will fail static type checks:
$ mypy proto-experiment.py
proto-experiment.py:21: error: Argument 1 to "foo" has incompatible type "MyOtherClass"; expected "MyShowProto"
Found 1 error in 1 file (checked 1 source file)
In addition, you can specify the base class explicitly, for instance:
class MyOtherClass(MyShowProto):
but note that this makes the methods of the base class actually available on the subclass, and thus the static checker will not report that a method definition is missing on MyOtherClass.
So in this case, in order to get useful type checking, all the methods that we want to be explicitly implemented should be decorated with @abstractmethod:
from typing import Protocol
from abc import abstractmethod

class MyShowProto(Protocol):
    @abstractmethod
    def show(self): raise NotImplementedError

class MyOtherClass(MyShowProto):
    pass

MyOtherClass()  # error in type checker
The interface package supports Python 2.7 and Python 3.4+.
To install it, run:
pip install python-interface
Example Code:
from interface import implements, Interface

class MyInterface(Interface):
    def method1(self, x):
        pass

    def method2(self, x, y):
        pass

class MyClass(implements(MyInterface)):
    def method1(self, x):
        return x * 2

    def method2(self, x, y):
        return x + y
There are third-party implementations of interfaces for Python (most popular is Zope's, also used in Twisted), but more commonly Python coders prefer to use the richer concept known as an "Abstract Base Class" (ABC), which combines an interface with the possibility of having some implementation aspects there too. ABCs are particularly well supported in Python 2.6 and later, see the PEP, but even in earlier versions of Python they're normally seen as "the way to go" -- just define a class some of whose methods raise NotImplementedError so that subclasses will be on notice that they'd better override those methods!-)
Something like this (might not work as I don't have Python around):
class IInterface:
    def show(self): raise NotImplementedError

class MyClass(IInterface):
    def show(self): print("Hello World!")
My understanding is that interfaces are not that necessary in dynamic languages like Python. In Java (or C++ with its abstract base classes), interfaces are a means of ensuring that, e.g., you're passing the right parameter, one able to perform a certain set of tasks.
For example, if you have an observer and an observable, the observable is interested in subscribing objects that support the IObserver interface, which in turn has a notify action. This is checked at compile time.
In Python, there is no such thing as compile time, and method lookups are performed at runtime. Moreover, one can override lookup with the __getattr__() or __getattribute__() magic methods. In other words, you can pass, as the observer, any object that returns a callable when its notify attribute is accessed.
This leads me to the conclusion that interfaces in Python do exist - it's just that their enforcement is postponed to the moment at which they are actually used.
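A small sketch of that idea (the class names are invented for illustration): any object that produces a callable for the notify attribute can serve as an observer, even via dynamic lookup:
class Printer:
    def notify(self, event):
        print("event:", event)

class Forwarder:
    """Not related to any observer base class, yet usable wherever an observer is expected."""
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # resolve notify (or any other attribute) dynamically at lookup time
        return getattr(self._target, name)

observer = Forwarder(Printer())
observer.notify("something happened")  # works: duck typing, no interface needed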
Suppose I define an abstract base class like this:
from abc import abstractmethod, ABCMeta

class Quacker(metaclass=ABCMeta):

    @abstractmethod
    def quack(self):
        return "Quack!"
This ensures any class deriving from Quacker must implement the quack method. But if I define the following:
class PoliteDuck(Quacker):
    def quack(self, name):
        return "Quack quack %s!" % name

d = PoliteDuck()  # no error
I'm allowed to instantiate the class because I've provided the quack method, but the function signatures don't match. I can see how this might be useful in some situations, but I'm interested in ensuring I can definitely call the abstract methods. This might fail if the function signature is different!
So: how can I enforce a matching function signature? I would expect an error when creating the object if the signatures don't match, just like if I hadn't defined it at all.
I know that this is not idiomatic, and that Python is the wrong language to be using if I want these sorts of guarantees, but that's beside the point - is it possible?
It's worse than you think. Abstract methods are tracked by name only, so you don't even have to make quack a method in order to instantiate the child class.
class SurrealDuck(Quacker):
    quack = 3

d = SurrealDuck()
print(d.quack)  # Shows 3
There is nothing in the system that enforces that quack is even a callable object, let alone one whose arguments match the abstract method's original. At best, you could subclass ABCMeta and add code yourself to compare type signatures in the child to the originals in the parent, but this would be nontrivial to implement.
(Currently, marking something as "abstract" essentially just adds the name to a frozen set attribute in the parent (Quacker.__abstractmethods__). Making a class instantiable is as simple as setting this attribute to an empty iterable, which is useful for testing.)
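If you really want the signature check, one possibility along the lines sketched above is a custom ABCMeta subclass that compares inspect.signature() of each override against the abstract declaration. This is only a rough, illustrative sketch under that assumption (SignatureCheckingMeta is a made-up name, and only direct bases are inspected):
import inspect
from abc import ABCMeta, abstractmethod

class SignatureCheckingMeta(ABCMeta):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        for base in bases:
            for attr in getattr(base, '__abstractmethods__', ()):
                impl = namespace.get(attr)
                declared = getattr(base, attr, None)
                if impl is None or declared is None:
                    continue  # not overridden here, or nothing to compare against
                if inspect.signature(impl) != inspect.signature(declared):
                    raise TypeError(
                        f'{name}.{attr} does not match the abstract signature')

class Quacker(metaclass=SignatureCheckingMeta):
    @abstractmethod
    def quack(self):
        return "Quack!"

try:
    class PoliteDuck(Quacker):
        def quack(self, name):  # extra parameter: signature mismatch
            return "Quack quack %s!" % name
except TypeError as exc:
    print(exc)  # PoliteDuck.quack does not match the abstract signature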
I recommend you look at pylint. I ran this code through its static analysis, and on the line where you defined the quack() method, it reported:
Argument number differs from overridden method (arguments-differ)
(https://en.wikipedia.org/wiki/Pylint)
I don't think this has changed in the base Python language, but I did find one workaround that might be useful. The mypy package does seem to enforce signature conformity between abstract base classes and their concrete implementations. So basically, if you define a signature on the abstract base class, all concrete classes have to follow the exact same signature.
Here is an example that will break in mypy. The code is taken from the mypy website, but I adapted it for this answer.
The first example is code that will pass. Note that the signatures for the eat method are the same and mypy does not complain.
from abc import ABCMeta, abstractmethod

class Animal(metaclass=ABCMeta):
    @abstractmethod
    def eat(self, food: str) -> None: pass

    @property
    @abstractmethod
    def can_walk(self) -> bool: pass

class Cat(Animal):
    def eat(self, food: str) -> None:
        pass  # Body omitted

    @property
    def can_walk(self) -> bool:
        return True

y = Cat()  # OK
But let's adapt this code a bit and now mypy throws an error:
from abc import ABCMeta, abstractmethod

class Animal(metaclass=ABCMeta):
    @abstractmethod
    def eat(self, food: str) -> None: pass

    @property
    @abstractmethod
    def can_walk(self) -> bool: pass

class Cat(Animal):
    def eat(self, food: str, drink: str) -> None:
        pass  # Body omitted

    @property
    def can_walk(self) -> bool:
        return True

y = Cat()  # Error
Mypy is still very much a work-in-progress but it does work in this case. There are some corner cases where some signature variants might not get caught, but otherwise seems to work for most practical applications.
Suppose I want to create an abstract class in Python with some methods to be implemented by subclasses, for example:
class Base():
    def f(self):
        print("Hello.")
        self.g()
        print("Bye!")

class A(Base):
    def g(self):
        print("I am A")

class B(Base):
    def g(self):
        print("I am B")
I'd like it so that, if the base class is instantiated and its f() method is called, then when self.g() is called it throws an exception telling you that a subclass should have implemented method g().
What's the usual thing to do here? Should I raise a NotImplementedError? or is there a more specific way of doing it?
In Python 2.6 and better, you can use the abc module to make Base an "actually" abstract base class:
import abc

class Base:
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def g(self):
        pass

    def f(self):  # &c
this guarantees that Base cannot be instantiated -- and neither can any subclass which fails to override g -- while meeting @Aaron's target of allowing subclasses to use super in their g implementations. Overall, a much better solution than what we used to have in Python 2.5 and earlier!
Side note: having Base inherit from object would be redundant, because the metaclass needs to be set explicitly anyway.
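To see that guarantee in action, here is a small sketch in Python 3 metaclass syntax (the names follow the question above):
import abc

class Base(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def g(self):
        pass

    def f(self):
        print("Hello.")
        self.g()
        print("Bye!")

class A(Base):
    def g(self):
        print("I am A")

A().f()  # works: Hello. / I am A / Bye!
try:
    Base()  # cannot be instantiated: g is abstract
except TypeError as exc:
    print(exc)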
Make a method that does nothing, but still has a docstring explaining the interface. Getting an AttributeError is confusing, and raising NotImplementedError (or any other exception, for that matter) will break proper usage of super.
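A minimal sketch of that style (the names follow the question above):
class Base:
    def g(self):
        """Produce this object's greeting.

        The base implementation intentionally does nothing, so that
        cooperative subclasses can still call super().g() safely.
        """

class A(Base):
    def g(self):
        super().g()  # safe: the base method exists and does nothing
        print("I am A")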
Peter Norvig has given a solution for this in his Python Infrequently Asked Questions list. I'll reproduce it here. Do check out the IAQ, it is very useful.
## Python
class MyAbstractClass:
    def method1(self): abstract()  # note: abstract() must actually be called for the error to be raised

class MyClass(MyAbstractClass):
    pass

def abstract():
    import inspect
    caller = inspect.getouterframes(inspect.currentframe())[1][3]
    raise NotImplementedError(caller + ' must be implemented in subclass')
In Java, for example, the @Override annotation not only provides compile-time checking of an override but makes for excellent self-documenting code.
I'm just looking for documentation (although if it's an indicator to some checker like pylint, that's a bonus). I can add a comment or docstring somewhere, but what is the idiomatic way to indicate an override in Python?
Based on this and fwc's answer I created a pip installable package https://github.com/mkorpela/overrides
From time to time I end up here looking at this question.
Mainly this happens after (again) seeing the same bug in our code base: someone has forgotten some "interface"-implementing class while renaming a method in the "interface".
Well, Python ain't Java, but Python has power -- and explicit is better than implicit -- and there are real concrete cases in the real world where this thing would have helped me.
So here is a sketch of an overrides decorator. It checks that the class given as a parameter has a method with the same name as the method being decorated.
If you can think of a better solution please post it here!
def overrides(interface_class):
    def overrider(method):
        assert method.__name__ in dir(interface_class)
        return method
    return overrider
It works as follows:
class MySuperInterface(object):
    def my_method(self):
        print('hello world!')

class ConcreteImplementer(MySuperInterface):
    @overrides(MySuperInterface)
    def my_method(self):
        print('hello kitty!')
and if you do a faulty version it will raise an assertion error during class loading:
class ConcreteFaultyImplementer(MySuperInterface):
    @overrides(MySuperInterface)
    def your_method(self):
        print('bye bye!')

>> AssertionError!!!!!!!
Here's an implementation that doesn't require specification of the interface_class name.
import inspect
import re

def overrides(method):
    # actually can't do this because a method is really just a function while inside a class def'n
    # assert(inspect.ismethod(method))

    stack = inspect.stack()
    base_classes = re.search(r'class.+\((.+)\)\s*\:', stack[2][4][0]).group(1)

    # handle multiple inheritance
    base_classes = [s.strip() for s in base_classes.split(',')]
    if not base_classes:
        raise ValueError('overrides decorator: unable to determine base class')

    # stack[0]=overrides, stack[1]=inside class def'n, stack[2]=outside class def'n
    derived_class_locals = stack[2][0].f_locals

    # replace each class name in base_classes with the actual class type
    for i, base_class in enumerate(base_classes):
        if '.' not in base_class:
            base_classes[i] = derived_class_locals[base_class]
        else:
            components = base_class.split('.')
            # obj is either a module or a class
            obj = derived_class_locals[components[0]]
            for c in components[1:]:
                assert(inspect.ismodule(obj) or inspect.isclass(obj))
                obj = getattr(obj, c)
            base_classes[i] = obj

    assert any(hasattr(cls, method.__name__) for cls in base_classes)
    return method
If you want this for documentation purposes only, you can define your own override decorator:
def override(f):
    return f

class MyClass(BaseClass):

    @override
    def method(self):
        pass
This is really nothing but eye candy, unless you create override(f) in such a way that it actually checks for an override.
But then, this is Python, why write it like it was Java?
Improvising on @mkorpela's great answer, here is a version with more precise checks, naming, and raised Error objects:
def overrides(interface_class):
    """
    Function override annotation.
    Corollary to @abc.abstractmethod where the override is not of an
    abstractmethod.
    Modified from answer https://stackoverflow.com/a/8313042/471376
    """
    def confirm_override(method):
        if method.__name__ not in dir(interface_class):
            raise NotImplementedError('function "%s" is an @override but that'
                                      ' function is not implemented in base'
                                      ' class %s'
                                      % (method.__name__,
                                         interface_class)
                                      )

        def func():
            pass

        attr = getattr(interface_class, method.__name__)
        if type(attr) is not type(func):
            raise NotImplementedError('function "%s" is an @override'
                                      ' but that is implemented as type %s'
                                      ' in base class %s, expected implemented'
                                      ' type %s'
                                      % (method.__name__,
                                         type(attr),
                                         interface_class,
                                         type(func))
                                      )
        return method
    return confirm_override
Here is what it looks like in practice:
NotImplementedError "not implemented in base class"
class A(object):
    # ERROR: `a` is not implemented!
    pass

class B(A):
    @overrides(A)
    def a(self):
        pass
results in a more descriptive NotImplementedError:
function "a" is an #override but that function is not implemented in base class <class '__main__.A'>
full stack
Traceback (most recent call last):
  …
  File "C:/Users/user1/project.py", line 135, in <module>
    class B(A):
  File "C:/Users/user1/project.py", line 136, in B
    @overrides(A)
  File "C:/Users/user1/project.py", line 110, in confirm_override
    interface_class)
NotImplementedError: function "a" is an @override but that function is not implemented in base class <class '__main__.A'>
NotImplementedError "expected implemented type"
class A(object):
    # ERROR: `a` is not a function!
    a = ''

class B(A):
    @overrides(A)
    def a(self):
        pass
results in a more descriptive NotImplementedError:
function "a" is an #override but that is implemented as type <class 'str'> in base class <class '__main__.A'>, expected implemented type <class 'function'>
full stack
Traceback (most recent call last):
  …
  File "C:/Users/user1/project.py", line 135, in <module>
    class B(A):
  File "C:/Users/user1/project.py", line 136, in B
    @overrides(A)
  File "C:/Users/user1/project.py", line 125, in confirm_override
    type(func))
NotImplementedError: function "a" is an @override but that is implemented as type <class 'str'> in base class <class '__main__.A'>, expected implemented type <class 'function'>
The great thing about @mkorpela's answer is that the check happens during an initialization phase; the check does not need to be "run". Referring to the prior examples, class B is never instantiated (B()), yet the NotImplementedError is still raised. This means overrides errors are caught sooner.
Python ain't Java. There's of course no such thing really as compile-time checking.
I think a comment in the docstring is plenty. This allows any user of your method to type help(obj.method) and see that the method is an override.
You can also explicitly extend an interface with class Foo(Interface), which will allow users to type help(Interface.method) to get an idea about the functionality your method is intended to provide.
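For instance, a tiny sketch of that documentation-only style (the names are invented for illustration):
class Interface:
    def method(self):
        """Do the thing; see this docstring for the intended contract."""

class Foo(Interface):
    def method(self):
        """Overrides Interface.method; does the thing eagerly."""
        return 42

help(Foo.method)        # shows that the method is an override
help(Interface.method)  # shows the intended contract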
Like others have said, unlike Java there is no @Override tag, but you can create your own using decorators as shown above. However, I would suggest using the getattr() builtin instead of poking at the class's internal dict, so you get something like the following:
def Override(superClass):
    def method(func):
        getattr(superClass, func.__name__)
        return func
    return method
If you wanted to, you could catch the getattr() lookup in your own try/except and raise your own error, but I think the getattr approach is better in this case.
Also, this catches all items bound to a class, including class methods and variables.
Based on @mkorpela's great answer, I've written a similar package (ipromise pypi github) that does many more checks:
Suppose A inherits from B and C, B inherits from C.
Module ipromise checks that:
If A.f overrides B.f, B.f must exist, and A must inherit from B. (This is the check from the overrides package).
You don't have the pattern A.f declares that it overrides B.f, which then declares that it overrides C.f. A should say that it overrides from C.f since B might decide to stop overriding this method, and that should not result in downstream updates.
You don't have the pattern A.f declares that it overrides C.f, but B.f does not declare its override.
You don't have the pattern A.f declares that it overrides C.f, but B.f declares that it overrides from some D.f.
It also has various features for marking and checking implementing an abstract method.
You can use protocols from PEP 544. With this method, the interface-implementation relation is declared only at the use site.
Assuming you already have an implementation (let's call it MyFoobar), you define an interface (a Protocol), which has the signatures of all the methods and fields of your implementation, let's call that IFoobar.
Then, at the use site, you declare the implementation instance binding to have the interface type e.g. myFoobar: IFoobar = MyFoobar(). Now, if you use a field/method that is missing in the interface, Mypy will complain at the use site (even if it would work at runtime!). If you failed to implement a method from the interface in the implementation, Mypy will also complain. Mypy won't complain if you implement something that doesn't exist in the interface. But that case is rare, since the interface definition is compact and easy to review. You wouldn't be able to actually use that code, since Mypy would complain.
Now, this won't cover cases where you have implementations both in the superclass and the implementing class, like some uses of ABC. But override is used in Java even with no implementation in the interface. This solution covers that case.
from typing import Protocol

class A(Protocol):
    def b(self):
        ...

    def d(self):  # we forgot to implement this in C
        ...

class C:
    def b(self):
        return 0

bob: A = C()
Type checking results in:
test.py:13: error: Incompatible types in assignment (expression has type "C", variable has type "A")
test.py:13: note: 'C' is missing following 'A' protocol member:
test.py:13: note: d
Found 1 error in 1 file (checked 1 source file)
As of Python 3.6 and above, the functionality provided by @override can easily be implemented using the descriptor protocol of Python, namely the __set_name__ dunder method:
from functools import update_wrapper

class override:
    def __init__(self, func):
        self._func = func
        update_wrapper(self, func)

    def __get__(self, obj, obj_type):
        if obj is None:
            return self
        # bind the wrapped function to the instance, like a normal method
        return self._func.__get__(obj, obj_type)

    def __set_name__(self, obj_type, name):
        self.validate_override(obj_type, name)

    def validate_override(self, obj_type, name):
        for parent in obj_type.__bases__:
            func = parent.__dict__.get(name, None)
            if callable(func):
                return
        raise NotImplementedError(f"{obj_type.__name__} does not override {name}")
Note that __set_name__ is called once the wrapping class is defined, and we can get the parent classes of that class through its __bases__ attribute. For each parent class, we check whether the wrapped function is implemented in that class by verifying that:
the function name is in the class __dict__
it is callable
Using it would be as simple as:
class AbstractShoppingCartService:
    def add_item(self, request: AddItemRequest) -> Cart:
        ...

class ShoppingCartService(AbstractShoppingCartService):

    @override
    def add_item(self, request: AddItemRequest) -> Cart:
        ...
Not only does the decorator I made check whether the name of the overriding attribute is in any superclass of the class the attribute is defined in, without having to specify a superclass, it also checks that the overriding attribute has the same type as the overridden attribute. Class methods are treated like methods and static methods are treated like functions. The decorator works for callables, class methods, static methods, and properties.
For source code see: https://github.com/fireuser909/override
This decorator only works for classes that are instances of override.OverridesMeta but if your class is an instance of a custom metaclass use the create_custom_overrides_meta function to create a metaclass that is compatible with the override decorator. For tests, run the override.__init__ module.
In Python 2.6+ and Python 3.2+ you can do it (actually simulate it; Python doesn't support function overloading, and a child class's method automatically overrides the parent's). We can use decorators for this. But first, note that Python's @decorators and Java's @Annotations are totally different things: the former is a wrapper with concrete code, while the latter is a flag for the compiler.
For this, first do pip install multipledispatch
from multipledispatch import dispatch as Override
# using alias 'Override' just to give you some feel :)

class A:
    def foo(self):
        print('foo in A')
    # More methods here

class B(A):
    @Override()
    def foo(self):
        print('foo in B')

    @Override(int)
    def foo(self, a):
        print('foo in B; arg =', a)

    @Override(str, float)
    def foo(self, a, b):
        print('foo in B; arg =', (a, b))

a = A()
b = B()
a.foo()
b.foo()
b.foo(4)
b.foo('Wheee', 3.14)
output:
foo in A
foo in B
foo in B; arg = 4
foo in B; arg = ('Wheee', 3.14)
Note that you must use the decorator here with parentheses.
One thing to remember: since Python doesn't have function overloading directly, even if class B doesn't inherit from class A but needs all those foos, you still need to use @Override (though the alias 'Overload' would look better in that case).
Here is a different solution without annotation.
It has a slightly different goal in mind. While the other proposed solutions check whether a given method actually overrides a parent, this one checks whether all parent methods were overridden.
You don't have to raise an AssertionError; you can print a warning, or disable the check in production by checking for the environment in __init__ and returning before the check runs.
class Parent:
    def a(self):
        pass

    def b(self):
        pass

class Child(Overrides, Parent):  # the Overrides mix-in is defined below
    def a(self):
        ...
    # raises an error, as b() is not overridden
class Overrides:
    def __init__(self):
        # collect all defined methods of all base-classes
        bases = [b for b in self.__class__.__bases__ if b != Overrides]
        required_methods = set()
        for base in bases:
            required_methods = required_methods.union(set([f for f in dir(base) if not f.startswith('_')]))

        # check for each method in each base class (in required_methods)
        # if the class that inherits `Overrides` implements them all
        missing = []

        # me is the fully qualified name of the CLASS which inherits
        # `Overrides`
        me = self.__class__.__qualname__
        for required_method in required_methods:
            # The method can be defined either in the parent or the child
            # class. To check it, we get a reference to the method via
            # getattr
            try:
                found = getattr(self, required_method)
            except AttributeError:
                # this should not happen, as getattr returns the method in
                # the parent class if it is not defined in the child class.
                # It has to be in a parent class, as required_methods
                # is a union of all base-class methods.
                missing.append(required_method)
                continue

            # here is where the magic happens.
            # found is a reference to a method, and found.__qualname__ is
            # the full name of the METHOD. Remember that me is the full
            # name of the class.
            # We want to check where the method is defined. If it is
            # defined in a parent class, we did not override it, thus it
            # is missing.
            # If we did not override it, the __qualname__ is Parent.method
            # If we did override it, the __qualname__ is Child.method
            # With this fact, we can determine whether the class which uses
            # `Overrides` did implement it.
            if not found.__qualname__.startswith(me + '.'):
                missing.append(required_method)

        # Maybe a warning would be enough here
        if missing != []:
            raise AssertionError(f'{me} did not override these methods: {missing}')
Here is the simplest approach, and it works under Jython with Java classes:
class MyClass(SomeJavaClass):
    def __init__(self):
        setattr(self, "name_of_method_to_override", self.__method_override__)

    def __method_override__(self, some_args):
        some_thing_to_do()