I'm coding in Python, but the question seems independent of programming language.
I have a class that represents some system check:
from abc import abstractmethod

class Check:
    @abstractmethod
    def run(self):
        """You have to define your own run(). As a result, it must set self._ok."""
        ...

    @property
    def is_ok(self):
        return self._ok
Then we have a set of checks created by subclassing the Check class, and they're used in the following way (simplified):
class Checker:
    checks = [check1, check2, ...]

    def __call__(self):
        for check in self.checks:
            if not check.is_ok:
                alarm()
The question is: is it fine to oblige a subclass to set some protected object attributes?
I guess it may be acceptable.
You could also consider letting the subclasses implement their own is_ok() rather than modify a protected attribute, so they can implement their own check strategy.
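For example, a minimal sketch of that idea, keeping is_ok as a property to match the original code (the DiskSpaceCheck subclass and its threshold are made up for illustration):

from abc import ABC, abstractmethod

class Check(ABC):
    @abstractmethod
    def run(self):
        ...

    @property
    @abstractmethod
    def is_ok(self):
        ...

class DiskSpaceCheck(Check):
    def run(self):
        self._free_ratio = 0.05  # pretend measurement

    @property
    def is_ok(self):
        # the subclass decides what "ok" means instead of setting a protected flag
        return self._free_ratio > 0.1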
Related
I'm trying to create a base class with a number of abstract Python properties, in Python 3.7.
I tried it one way (see 'start' below) using the @property, @abstractmethod and @property.setter decorators. This worked, but it doesn't raise an exception if the subclass doesn't implement a setter. That's the point of using @abstractmethod to me, so that's no good.
So I tried doing it another way (see 'end' below) using two @abstractmethod methods and a property(), which is not abstract itself but uses those methods. This approach generates an error when instantiating the subclass:
TypeError: Can't instantiate abstract class FirstStep with abstract methods end
I'm clearly implementing the abstract methods, so I don't understand what it means. The 'end' property is not marked abstract, but if I comment it out, it does run (but I don't get my property). I also added the non-abstract test method 'test_elapsed_time' to demonstrate that I have the class structure and abstraction right (it works).
Any chance I'm doing something dumb, or is there some special behavior around property() that's causing this?
class ParentTask(Task):
    def get_first_step(self):
        # TypeError: Can't instantiate abstract class FirstStep with abstract methods end
        return FirstStep(self)


class Step(ABC):
    # __metaclass__ = ABCMeta
    def __init__(self, task):
        self.task = task

    # First approach. Works, but no warnings if the subclass doesn't implement the setter.
    @property
    @abstractmethod
    def start(self):
        pass

    @start.setter
    @abstractmethod
    def start(self, value):
        pass

    # Second approach. This may look slightly messier, but raises errors if not implemented.
    @abstractmethod
    def get_end(self):
        pass

    @abstractmethod
    def set_end(self, value):
        pass

    end = property(get_end, set_end)

    def test_elapsed_time(self):
        return self.get_end() - self.start


class FirstStep(Step):
    @property
    def start(self):
        return self.task.start_dt

    # No warnings if this is commented out.
    @start.setter
    def start(self, value):
        self.task.start_dt = value

    def get_end(self):
        return self.task.end_dt

    def set_end(self, value):
        self.task.end_dt = value
I suspect this is a bug in the interaction of abstract methods and properties.
In your base class, the following things happen, in order:
1. You define an abstract method named start.
2. You create a new property that uses the abstract method from 1) as its getter. The name start now refers to this property, with the only reference to the original function now held by start.fget.
3. Python saves a temporary reference to start.setter, because the name start is about to be bound to yet another object.
4. You create a second abstract method named start.
5. The reference from 3) is given the abstract method from 4) to define a new property to replace the one currently bound to the name start. This property has as its getter the method from 1) and as its setter the method from 4). Now start refers to this property; start.fget refers to the method from 1); start.fset refers to the method from 4).
At this point, you have a property whose component functions are abstract methods. The property itself was not decorated as abstract, but the definition of property.__isabstractmethod__ marks it as such because all its component methods are abstract. More importantly, you have the following entries in Step.__abstractmethods__:
start, the property
end, the property
set_end, the setter for end
get_end, the getter for end
Note that the component functions for the start property are missing, because __abstractmethods__ stores the names of, not references to, the things that need to be overridden. Using property and the resulting property's setter method as decorators repeatedly replaces what the name start refers to.
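You can see this directly once the Step class above has been defined (a quick inspection, not part of the original code; the ordering inside the frozenset may vary):

print(Step.__abstractmethods__)
# frozenset({'start', 'end', 'get_end', 'set_end'})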
Now, in your child class, you define a new property named start that shadows the inherited property; this new property has no setter and a concrete method as its getter. At this point, it doesn't matter whether you provide a setter for this property or not, because as far as the abc machinery is concerned, you have provided everything it asked for:
A concrete method for the name start
Concrete methods for the names get_end and set_end
Implicitly a concrete definition for the name end, because all of the underlying functions for the property end have been provided concrete definitions.
@chepner answered and explained it well. Based on that, I came up with a way around it that is... well... you decide. Sneaky at best. But it achieves my 3 main goals:
1. Raises exceptions for unimplemented setters in subclasses.
2. Supports the Python property semantics (vs. functions etc.).
3. Avoids the boilerplate of re-declaring every property in every subclass, which still might not have solved #1 anyway.
Just declare the abstract get/set functions in the base class (not the property). Then add a @classmethod initializer to the base class that creates the actual properties using those abstract methods; by that point, they'll be concrete methods on the subclass.
It's a one-liner after the subclass declaration to init the properties. Nothing enforces that call being made, so it's not ironclad. Not a big savings in this example, but I'll have many properties. The end result doesn't look as dirty as I thought it would. Would like to hear comments or warnings of things I'm overlooking.
from abc import abstractmethod, ABC


class ParentTask(object):
    def __init__(self):
        self.first_step = FirstStep(self)
        self.second_step = SecondStep(self)
        print(self.first_step.end)
        print(self.second_step.end)


class Step(ABC):
    def __init__(self, task):
        self.task = task

    @classmethod
    def init_properties(cls):
        cls.end = property(cls.get_end, cls.set_end)

    @abstractmethod
    def get_end(self):
        pass

    @abstractmethod
    def set_end(self, value):
        pass


class FirstStep(Step):
    def get_end(self):
        return 1

    def set_end(self, value):
        self.task.end = value


class SecondStep(Step):
    def get_end(self):
        return 2

    def set_end(self, value):
        self.task.end = value


FirstStep.init_properties()
SecondStep.init_properties()

ParentTask()
I have a class that implements a strategy. As part of a wider strategy API it has a public interface. In this particular class the main_method applies various conditions and has a helper_... method for each condition. Thus, by subclassing and overriding these helper methods you can change the behaviour of the strategy. This is intended. However, these helper methods are not part of the API that is being implemented/exposed to the client.
It seems to me that these methods should be considered private, as they are not part of the interface that is being implemented, but on the other hand they are intended to be overridden by subclasses. In Java they would be "protected".
What's the pythonic way to deal with this situation? My code is schematically similar to the following:
class BasicFoo(object):
    def __init__(self):
        pass

    def main_method(self, input):
        if condition_1:
            self._helper1(input)
        elif condition_2:
            self._helper2(input)
        else:
            self._helper3(input)

    def _helper1(self, input):
        # do something
        pass

    def _helper2(self, input):
        # do something else
        pass

    def _helper3(self, input):
        # do something else again
        pass


class ModifiedFoo(BasicFoo):
    def __init__(self):
        super(ModifiedFoo, self).__init__()

    def _helper1(self, input):
        # a different behaviour
        pass
Just make the methods public and clarify their intended use in the documentation. Take, for example, the ast.NodeVisitor class from the standard library:
This class is meant to be subclassed, with the subclass adding visitor
methods.
visit(node)
Visit a node. The default implementation calls the method called self.visit_classname where classname is the name of the node class, or
generic_visit() if that method doesn’t exist.
Given that in Python you don't have access modifiers, you can only rely on documentation and conventions to clarify the roles of such methods.
In this case your methods are part of the public API (a private attribute/method could be renamed at leisure between releases, but this isn't the case here).
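For instance, a minimal sketch of that convention using ast.NodeVisitor (the CountNames visitor is made up for illustration):

import ast

class CountNames(ast.NodeVisitor):
    def __init__(self):
        self.count = 0

    # visit_Name is public, but its role (it is called by visit()/generic_visit())
    # is conveyed by documentation and naming convention, not by access modifiers
    def visit_Name(self, node):
        self.count += 1
        self.generic_visit(node)

visitor = CountNames()
visitor.visit(ast.parse("x = y + z"))
print(visitor.count)  # 3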
I have a class sysprops in which I'd like to have a number of constants. However, I'd like to pull the values for those constants from the database, so I'd like some sort of hook any time one of these class constants is accessed (something like the __getattribute__ method for instance variables).
class sysprops(object):
    SOME_CONSTANT = 'SOME_VALUE'

sysprops.SOME_CONSTANT  # this statement would not return 'SOME_VALUE' but instead a dynamic value pulled from the database
Although I think it is a very bad idea to do this, it is possible:
class GetAttributeMetaClass(type):
    def __getattribute__(self, key):
        print('Getting attribute', key)
        # fall back to the normal lookup so the class still works
        return type.__getattribute__(self, key)

class sysprops(metaclass=GetAttributeMetaClass):
    pass
While the other two answers have a valid method, I like to take the route of 'least magic'. You can do something similar to the metaclass approach without actually using one, simply by using a decorator.
def instancer(cls):
    return cls()

@instancer
class SysProps(object):
    def __getattribute__(self, key):
        return key  # dummy
This will create an instance of SysProps and assign it back to the SysProps name, effectively shadowing the actual class definition and leaving you with a constant instance. Since decorators are more common in Python, I find this way easier to grasp for other people who have to read your code.
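With the dummy __getattribute__ above, every attribute access simply echoes the attribute name back; a real implementation would look the key up in the database at that point instead:

print(SysProps.SOME_CONSTANT)  # prints 'SOME_CONSTANT', since SysProps is now the single instance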
sysprops.SOME_CONSTANT could be the return value of a function if SOME_CONSTANT were a property defined on type(sysprops).
In other words, what you are talking about is commonly done if sysprops were an instance instead of a class.
But here is the kicker: classes are instances of metaclasses. So everything you know about controlling the behavior of instances through the use of classes applies equally well to controlling the behavior of classes through the use of metaclasses.
Usually the metaclass is type, but you are free to define other metaclasses by subclassing type. If you place a property SOME_CONSTANT in the metaclass, then the instance of that metaclass, e.g. sysprops will have the desired behavior when Python evaluates sysprops.SOME_CONSTANT.
class MetaSysProps(type):
    @property
    def SOME_CONSTANT(cls):
        return 'SOME_VALUE'


class SysProps(metaclass=MetaSysProps):
    pass


print(SysProps.SOME_CONSTANT)
yields
SOME_VALUE
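To actually pull the value from a database, the property on the metaclass would perform the lookup. A minimal sketch, assuming a hypothetical fetch_from_db helper standing in for your real query:

def fetch_from_db(key):
    # hypothetical helper; imagine a real query against your settings table here
    return {'SOME_CONSTANT': 'value-from-db'}[key]

class MetaSysProps(type):
    @property
    def SOME_CONSTANT(cls):
        return fetch_from_db('SOME_CONSTANT')

class SysProps(metaclass=MetaSysProps):
    pass

print(SysProps.SOME_CONSTANT)  # value-from-db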
For example, I have a
class BaseHandler(object):
    def prepare(self):
        self.prepped = 1
I do not want everyone who subclasses BaseHandler and also wants to implement prepare to have to remember to call
super(SubBaseHandler, self).prepare()
Is there a way to ensure the superclass method is run even if the subclass also implements prepare?
I have solved this problem using a metaclass.
Using a metaclass allows the implementer of the BaseHandler to be sure that all subclasses will call the superclass's prepare() with no adjustment to any existing code.
The metaclass looks for an implementation of prepare on both classes and then overwrites the subclass prepare with one that calls superclass.prepare followed by subclass.prepare.
class MetaHandler(type):
    def __new__(cls, name, bases, attrs):
        instance = type.__new__(cls, name, bases, attrs)
        super_instance = super(instance, instance)
        # Only wrap when this class defines its own prepare and a base class provides one too;
        # checking attrs (rather than hasattr on the new class) avoids re-wrapping inherited prepares.
        if hasattr(super_instance, 'prepare') and 'prepare' in attrs:
            super_prepare = getattr(super_instance, 'prepare')
            sub_prepare = attrs['prepare']

            def new_prepare(self):
                super_prepare(self)
                sub_prepare(self)

            setattr(instance, 'prepare', new_prepare)
        return instance


class BaseHandler(metaclass=MetaHandler):
    def prepare(self):
        print('BaseHandler.prepare')


class SubHandler(BaseHandler):
    def prepare(self):
        print('SubHandler.prepare')
Using it looks like this:
>>> sh = SubHandler()
>>> sh.prepare()
BaseHandler.prepare
SubHandler.prepare
Tell your developers to define prepare_hook instead of prepare, but
tell the users to call prepare:
class BaseHandler(object):
    def prepare(self):
        self.prepped = 1
        self.prepare_hook()

    def prepare_hook(self):
        pass


class SubBaseHandler(BaseHandler):
    def prepare_hook(self):
        pass


foo = SubBaseHandler()
foo.prepare()
If you want more complex chaining of prepare calls from multiple subclasses, then your developers should really use super as that's what it was intended for.
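For reference, a minimal sketch of that cooperative chaining with super (nothing here beyond standard super usage):

class BaseHandler(object):
    def prepare(self):
        self.prepped = 1

class SubBaseHandler(BaseHandler):
    def prepare(self):
        super().prepare()        # each override explicitly hands off to the next class in the MRO
        self.sub_prepped = True

foo = SubBaseHandler()
foo.prepare()  # sets both prepped and sub_prepped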
Just accept that you have to tell people subclassing your class to call the base method when overriding it. Every other solution either requires you to explain something else to them instead, or involves un-Pythonic hacks which could be circumvented too.
Python’s object inheritance model was designed to be open, and any attempt to go another way will just overcomplicate a problem which does not really exist anyway. Just tell everybody using your stuff to either follow your “rules”, or the program will mess up.
One explicit solution without too much magic going on would be to maintain a list of prepare call-backs:
class BaseHandler(object):
    def __init__(self):
        self.prepare_callbacks = []

    def register_prepare_callback(self, callback):
        self.prepare_callbacks.append(callback)

    def prepare(self):
        # Do BaseHandler preparation
        for callback in self.prepare_callbacks:
            callback()


class MyHandler(BaseHandler):
    def __init__(self):
        BaseHandler.__init__(self)
        self.register_prepare_callback(self._prepare)

    def _prepare(self):
        # whatever
        pass
In general you can try using __getattribute__ to achieve something like this (until the moment someone overrides this method too), but it is against the Python philosophy. There is a reason to be able to access private object members in Python; the reason is mentioned in import this.
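For completeness, a rough sketch of what the __getattribute__ route could look like (not recommended, per the above; the wrapping logic is illustrative only):

class BaseHandler(object):
    def prepare(self):
        self.prepped = 1

    def __getattribute__(self, name):
        attr = object.__getattribute__(self, name)
        # only wrap 'prepare' when a subclass has actually overridden it
        if name == 'prepare' and type(self).prepare is not BaseHandler.prepare:
            def wrapped():
                BaseHandler.prepare(self)  # base preparation always runs first
                attr()                     # then the subclass override
            return wrapped
        return attr

class SubHandler(BaseHandler):
    def prepare(self):
        print('SubHandler.prepare')

sh = SubHandler()
sh.prepare()       # runs BaseHandler.prepare, then prints 'SubHandler.prepare'
print(sh.prepped)  # 1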
I have a module (db.py) which loads data from different database types (sqlite, mysql, etc.). The module contains a class db_loader and subclasses (sqlite_loader, mysql_loader) which inherit from it.
The type of database being used is specified in a separate params file.
How does the user get the right object back?
i.e how do I do:
loader = db.loader()
Do I use a method called loader in the db.py module or is there a more elegant way whereby a class can pick its own subclass based on a parameter? Is there a standard way to do this kind of thing?
Sounds like you want the Factory Pattern. You define a factory method (either in your module, or perhaps in a common parent class for all the objects it can produce) that you pass the parameter to, and it will return an instance of the correct class. In Python the problem is a bit simpler than some of the details in the Wikipedia article suggest, as your types are dynamic.
class Animal(object):
    @staticmethod
    def get_animal_which_makes_noise(noise):
        if noise == 'meow':
            return Cat()
        elif noise == 'woof':
            return Dog()


class Cat(Animal):
    ...


class Dog(Animal):
    ...
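Usage might then look like this:

cat = Animal.get_animal_which_makes_noise('meow')
print(type(cat))  # <class '__main__.Cat'>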
is there a more elegant way whereby a class can pick its own subclass based on a parameter?
You can do this by overriding your base class's __new__ method. This will allow you to simply go loader = db_loader(db_type) and loader will magically be the correct subclass for the database type. This solution is mildly more complicated than the other answers, but IMHO it is surely the most elegant.
In its simplest form:
class Parent:
    def __new__(cls, feature):
        subclass_map = {subclass.feature: subclass for subclass in cls.__subclasses__()}
        subclass = subclass_map[feature]
        instance = super(Parent, subclass).__new__(subclass)
        return instance


class Child1(Parent):
    feature = 1


class Child2(Parent):
    feature = 2


type(Parent(1))  # <class '__main__.Child1'>
type(Parent(2))  # <class '__main__.Child2'>
(Note that as long as __new__ returns an instance of cls, the instance's __init__ method will automatically be called for you.)
This simple version has issues though and would need to be expanded upon and tailored to fit your desired behaviour. Most notably, this is something you'd probably want to address:
Parent(3) # KeyError
Child1(1) # KeyError
So I'd recommend either adding cls to subclass_map or using it as the default, like so: subclass_map.get(feature, cls). If your base class isn't meant to be instantiated (maybe it even has abstract methods?), then I'd recommend giving Parent the metaclass abc.ABCMeta.
If you have grandchild classes too, then I'd recommend putting the gathering of subclasses into a recursive class method that follows each lineage to the end, adding all descendants.
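A minimal sketch of both suggestions combined (the _descendants helper and the Grandchild1 class are made up for illustration):

class Parent:
    @classmethod
    def _descendants(cls):
        # recursively gather every descendant class, not just direct children
        result = []
        for sub in cls.__subclasses__():
            result.append(sub)
            result.extend(sub._descendants())
        return result

    def __new__(cls, feature):
        subclass_map = {sub.feature: sub for sub in cls._descendants() if hasattr(sub, 'feature')}
        subclass = subclass_map.get(feature, cls)  # fall back to cls for unknown features
        return super().__new__(subclass)

class Child1(Parent):
    feature = 1

class Grandchild1(Child1):
    feature = 3

print(type(Parent(3)))  # <class '__main__.Grandchild1'>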
This solution is more beautiful than the factory method pattern IMHO. And unlike some of the other answers, it's self-maintaining because the list of subclasses is created dynamically, instead of being kept in a hardcoded mapping. And this will only instantiate subclasses, unlike one of the other answers, which would instantiate anything in the global namespace matching the given parameter.
I'd store the name of the subclass in the params file, and have a factory method that would instantiate the class given its name:
class loader(object):
    @staticmethod
    def get_loader(name):
        return globals()[name]()

class sqlite_loader(loader): pass
class mysql_loader(loader): pass

print(type(loader.get_loader('sqlite_loader')))
print(type(loader.get_loader('mysql_loader')))
Store the classes in a dict, instantiate the correct one based on your param:
db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)
loader = db_loaders.get(db_type, default_loader)()
where db_type is the parameter you are switching on, and sqlite_loader and mysql_loader are the "loader" classes.
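A minimal self-contained sketch of that idea (the loader classes here are empty stand-ins):

class default_loader(object): pass
class sqlite_loader(default_loader): pass
class mysql_loader(default_loader): pass

db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)

db_type = 'sqlite'  # would come from your params file
loader = db_loaders.get(db_type, default_loader)()
print(type(loader))  # <class '__main__.sqlite_loader'>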