It is possible to inject a function into a class like this:
class MainClass:
    ...

def simple_injected_func(self: MainClass, arg: str) -> None:
    print(f"simple_injected_func({arg})")

MainClass.simple_injected_func = simple_injected_func

main_object = MainClass()
main_object.simple_injected_func("arg")
# outputs: simple_injected_func(arg)
Furthermore, it is possible to make an object callable like this:
class SimpleCallableClass:
    def __call__(self, arg: str) -> None:
        print(f"SimpleCallableClass()({arg})")

simple_callable_object = SimpleCallableClass()
simple_callable_object("arg")
# outputs: SimpleCallableClass()(arg)
I now want to combine these two things and inject a callable class/object as a function into another class while keeping access to object variables and methods of both the CallableClass as well as the MainClass. (Internally I want to use this to effectively implement method inheritance and inject those methods into a class from another file)
from inspect import signature

class CallableClass:
    def __call__(self_, self: MainClass, arg: str) -> None:
        print(f"CallableClass()({arg})")

callable_object = CallableClass()
MainClass.callable_object = callable_object

main_object = MainClass()

print(signature(simple_injected_func))
# outputs: (self: __main__.MainClass, arg: str) -> None
print(signature(callable_object))
# outputs: (self: __main__.MainClass, arg: str) -> None
print(signature(main_object.simple_injected_func))
# outputs: (arg: str) -> None
print(signature(main_object.callable_object))
# outputs: (self: __main__.MainClass, arg: str) -> None
main_object.simple_injected_func("my arg")
# outputs: simple_injected_func(my arg)
main_object.callable_object("my arg")
# Traceback (most recent call last):
# main_object.callable_object("my arg")
# TypeError: CallableClass.__call__() missing 1 required positional argument: 'arg'
Why does the second self not get correctly stripped in the case of the callable object? Is there some way of achieving this?
When methods of an instance are accessed, Python performs "binding", i.e. it creates a bound method. See here:
>>> class Class:
...     def method(self, x):
...         return x
...
>>>
>>> instance = Class()
>>> Class.method
<function Class.method at 0x7fa688037158>
>>> instance.method
<bound method Class.method of <__main__.Class object at 0x7fa688036278>>
The binding is done because methods are implemented as descriptors.
You can also implement your callable as a descriptor if you want to have that behaviour.
In short, you would have to implement a class with at least a __get__ method. That __get__ method will be called when either Class.method or instance.method is evaluated. It should return the callable (which should be a different one depending on whether there is an instance or not).
BTW, to actually bind a method to an instance, it is simplest to use functools.partial:
bound_method = functools.partial(method, instance)
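For instance, a minimal sketch of binding a plain function to an instance this way (the `Greeter`/`greet` names are just for illustration):

```python
import functools

class Greeter:
    pass

def greet(self, name):
    return f"hello, {name}"

# Pre-fill `self` with a specific instance, yielding a "bound" callable:
instance = Greeter()
bound = functools.partial(greet, instance)
print(bound("world"))  # prints: hello, world
```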
All summed up:
import functools

class Callable:
    def __call__(self, instance, arg):
        print(f"Callable()({arg})")

class Descriptor:
    def __init__(self, callable):
        self._callable = callable

    def __get__(self, instance, owner):
        if instance is None:
            return self._callable
        else:
            return functools.partial(self._callable, instance)

class Class:
    pass

Class.method = Descriptor(Callable())
And then:
>>> signature(Class.method)
<Signature (instance, arg)>
>>> signature(Class().method)
<Signature (arg)>
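Putting the pieces together, a minimal end-to-end sketch (with the `functools` import included) shows the bound call now succeeding:

```python
import functools
from inspect import signature

class Callable:
    def __call__(self, instance, arg):
        print(f"Callable()({arg})")

class Descriptor:
    def __init__(self, callable_):
        self._callable = callable_

    def __get__(self, instance, owner):
        if instance is None:
            # accessed on the class: return the raw callable
            return self._callable
        # accessed on an instance: pre-bind the instance
        return functools.partial(self._callable, instance)

class Class:
    pass

Class.method = Descriptor(Callable())

Class().method("my arg")          # prints: Callable()(my arg)
print(signature(Class.method))    # (instance, arg)
print(signature(Class().method))  # (arg)
```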
I am trying to declare a base class with certain attributes whose (very expensive) calculation differs depending on the subclass, but that accepts injecting the value if it was previously calculated:
import time

class Test:
    _value1: int | None = None
    _value2: str | None = None
    _value3: list | None = None
    _value4: dict | None = None

    @property
    def value1(self) -> int:
        if self._value1 is None:
            self._value1 = self._get_value1()
        return self._value1

    @value1.setter
    def value1(self, value1: int) -> None:
        self._value1 = value1

    def _get_value1(self) -> int:
        raise NotImplementedError

class SubClass(Test):
    def _get_value1(self) -> int:
        time.sleep(1000000)
        return 1

instance = SubClass()
instance.value1 = 1
print(instance.value1)  # doesn't wait
As you can see it becomes very verbose, with every property having three different functions associated with it.
Is there a way to dynamically declare at the very least the setter, so that mypy knows it's always the same function but with proper typing? Or in general, is there a more concise way to declare, in bulk, this kind of writable property whose underlying implementation must be provided by each subclass?
Declaring __setattr__ doesn't seem to be viable, because just having __setattr__ declared tricks mypy into thinking I can assign any value to any attribute that isn't overloaded, while I still want errors to show up in case I'm trying to assign to the wrong attributes. It also doesn't remove the need to declare setters; otherwise mypy thinks the value is immutable.
Instead of inheriting a bunch of pre-defined properties from a base class, I would move all the logic surrounding each property into a custom descriptor class. (The following assumes Python 3.11 and mypy version 1.0.0.)
from typing import TypeVar, Generic, Callable, Type, Optional, Self, Union, overload

T = TypeVar('T')
C = TypeVar('C')

class Descriptor(Generic[C, T]):
    def __init__(self, f: Callable[[C], T]):
        self.getter = f

    def __set_name__(self, owner: C, name: str):
        self.private_name = "_" + name
        self.public_name = name

    @overload
    def __get__(self: Self, obj: C, objtype: Optional[Type[C]]) -> T:
        ...

    @overload
    def __get__(self: Self, obj: None, objtype: Type[C]) -> Self:
        ...

    def __get__(self: Self, obj: Optional[C], owner: Optional[Type[C]] = None) -> Union[Self, T]:
        if obj is None:
            return self
        if getattr(obj, self.private_name, None) is None:
            init_value = self.getter(obj)
            self.__set__(obj, init_value)
        return getattr(obj, self.private_name)

    def __set__(self, obj: C, value: T):
        setattr(obj, self.private_name, value)
Then you can define each descriptor similar to how you would define a property, by decorating the function that will return an initial value if none has yet been set.
import time

class Test:
    @Descriptor
    def value1(self) -> int:
        time.sleep(10000000)
        return 1

    @Descriptor
    def value2(self) -> str:
        return "foo"

    @Descriptor
    def value3(self) -> list:
        return [1, 2, 3]

    @Descriptor
    def value4(self) -> dict:
        return dict(foo=9)
The descriptor class is generic in both the class it will be used in and the type of the wrapped value.
x = Test()
reveal_type(x.value1) # int
reveal_type(Test.value1) # Descriptor[Test, int]
x.value1 = 3 # OK
x.value1 = "foo" # error, x.__set__ expects an int, not a str
If you wanted to simply omit writing the @property setter (this part)
@value1.setter
def value1(self, value1: int) -> None:
    self._value1 = value1
one possible implementation would be to subclass property to automatically implement a __set__ method which matches the behaviour specified in your example:
from __future__ import annotations

import typing as t

if t.TYPE_CHECKING:
    import collections.abc as cx

_ValueT = t.TypeVar("_ValueT")

class settable(property, t.Generic[_ValueT]):
    fget: cx.Callable[[t.Any], _ValueT]

    def __init__(self, fget: cx.Callable[[t.Any], _ValueT], /) -> None:
        super().__init__(fget)

    if t.TYPE_CHECKING:
        # Type-safe descriptor protocol for property retrieval methods (`__get__`)
        # see https://docs.python.org/3/howto/descriptor.html
        # These are under `typing.TYPE_CHECKING` because we don't need
        # to modify their implementation from `builtins.property`, but
        # just need to add type-safety.
        @t.overload  # type: ignore[override, no-overload-impl]
        def __get__(self, instance: None, Class: type, /) -> settable[_ValueT]:
            """
            Retrieving a property from a class (`instance: None`) retrieves the
            property object (`settable[_ValueT]`)
            """

        @t.overload
        def __get__(self, instance: object, Class: type, /) -> _ValueT:
            """
            Retrieving a property from the instance (all other `typing.overload` cases)
            retrieves the value
            """

    def __set__(self, instance: t.Any, value: _ValueT) -> None:
        """
        Type-safe setter method. Grabs the name of the function first decorated with
        `@settable`, then calls `setattr` on the given value with an attribute name of
        '_<function name>'.
        """
        setattr(instance, f"_{self.fget.__name__}", value)
Here's a demonstration of type-safety:
import time

class Test:
    _value1: int | None = None
    _value2: str | None = None
    _value3: list | None = None
    _value4: dict | None = None

    @settable
    def value1(self) -> int:
        if self._value1 is None:
            self._value1 = self._get_value1()
        return self._value1

    def _get_value1(self) -> int:
        raise NotImplementedError

class SubClass(Test):
    def _get_value1(self) -> int:
        time.sleep(1000000)
        return 1
>>> instance: SubClass = SubClass()
>>> instance.value1 = 1 # OK
>>>
>>> if t.TYPE_CHECKING:
...     reveal_type(instance.value1)  # mypy: Revealed type is "builtins.int"
...
>>> print(instance.value1)
1
>>> instance.value1 = "1" # mypy: Incompatible types in assignment (expression has type "str", variable has type "int") [assignment]
>>> SubClass.value1 = 1 # mypy: Cannot assign to a method [assignment]
... # mypy: Incompatible types in assignment (expression has type "int", variable has type "settable[int]") [assignment]
There are subclasses that have the class attribute matcher_function set to a function. During instantiation that function is called and sets another attribute matcher. In all cases the return object of the matcher_function is what matcher gets set to.
Is it possible to create a type hint in the base class BaseResolution that would allow both mypy and pycharm to properly infer matcher is the return value of matcher_function?
# contains_the_text.py
from hamcrest import contains_string
from hamcrest.library.text.stringcontains import StringContains

from .base_resolution import BaseResolution

class ContainsTheText(BaseResolution):
    matcher: StringContains  # <-- this is what I'm curious can be inferred
    matcher_function = contains_string  # <-- this function returns an instance
                                        #     of `StringContains`

# it would be wonderful if
# a. mypy could detect that the matcher type hint is correct based on `matcher_function`
# b. infer what the type is when the hint is not present in the subclasses.
# base_resolution.py
from typing import Any, Callable, TypeVar

from hamcrest.core.base_matcher import BaseMatcher, Matcher
from hamcrest.core.description import Description

T = TypeVar("T")

class BaseResolution(BaseMatcher[T]):
    matcher: Matcher
    matcher_function: Callable
    expected: Any

    def __init__(self, *args: object, **kwargs: object) -> None:
        cls = self.__class__
        if args and kwargs:
            self.expected = (args, kwargs)
            self.matcher = cls.matcher_function(*args, **kwargs)
        elif args:
            self.expected = args if len(args) > 1 else args[0]
            self.matcher = cls.matcher_function(*args)
        elif kwargs:
            self.expected = kwargs
            self.matcher = cls.matcher_function(**kwargs)
        else:
            self.expected = True
            self.matcher = cls.matcher_function()

    def _matches(self, item: T) -> bool:
        """passthrough to the matcher's method."""
        return self.matcher.matches(item)

    # truncated a whole bunch of other methods...
While these are likely better typehints, they didn't seem to do the trick.
class BaseResolution(BaseMatcher[T]):
    matcher: Matcher[T]
    matcher_function: Callable[..., Matcher[T]]
I know you can do something sorta similar using TypeVar(bound=) which will infer function return types based on arguments passed in. But I can't seem to figure out how (if even possible) to apply that at a class attribute level.
from typing import Type, TypeVar, Generic

T = TypeVar("T")

class Foo(Generic[T]):
    ...

class FooBar(Foo[T]):
    ...

F = TypeVar("F", bound=Foo)  # any subclass of Foo

class MyClass(FooBar):
    ...

def bar(f: Type[F]) -> F:
    ...

def baz(f: Type[Foo]) -> Foo:
    ...

objx = bar(MyClass)
objy = baz(MyClass)

reveal_type(objx)  # -> MyClass*
reveal_type(objy)  # -> Foo[Any]
Given the above example I tried the following but that clearly isn't right.
F = TypeVar("F", bound=Matcher)

class BaseResolution(BaseMatcher[T]):
    matcher: F
    matcher_function: Callable[..., F]

# mypy returns
# Type variable "base_resolution.F" is unbound
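For comparison, one pattern that does type-check is to make the base class itself generic over the matcher type, so each subclass pins down matcher via a type parameter rather than a re-annotation. A minimal sketch (the Matcher, StringContains, and contains_string below are simplified stand-ins for the hamcrest types, not the real API):

```python
from typing import Callable, Generic, TypeVar

# Stand-ins for the hamcrest types, for illustration only
class Matcher:
    def matches(self, item: object) -> bool:
        return True

class StringContains(Matcher):
    pass

def contains_string(substring: str) -> StringContains:
    return StringContains()

M = TypeVar("M", bound=Matcher)

class BaseResolution(Generic[M]):
    matcher: M
    matcher_function: Callable[..., M]

    def __init__(self, *args: object, **kwargs: object) -> None:
        self.matcher = type(self).matcher_function(*args, **kwargs)

class ContainsTheText(BaseResolution[StringContains]):
    # staticmethod keeps the function from binding as an instance method
    matcher_function = staticmethod(contains_string)

resolution = ContainsTheText("some text")
# mypy can now infer resolution.matcher as StringContains
```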
How to determine if an object is a class method? Isn't it best practice to use isinstance(), and how does one make that work?
class Foo:
    class_var = 0

    @classmethod
    def bar(cls):
        cls.class_var += 1
        print("class variable value:", cls.class_var)

def wrapper(wrapped: classmethod):
    """
    Call the wrapped method.

    :param wrapped: (classmethod, required)
    """
    wrapped()

Foo.bar()
wrapper(Foo.bar)
print("the type is:", type(Foo.bar))
print("instance check success:", isinstance(Foo.bar, classmethod))
Output:
class variable value: 1
class variable value: 2
the type is: <class 'method'>
instance check success: False
If you just want to tell class methods apart from regular methods and static methods, then you can check this with inspect.ismethod(f).
class A:
    def method(self): pass

    @classmethod
    def class_method(cls): pass

    @staticmethod
    def static_method(): pass
In the REPL:
>>> from inspect import ismethod
>>> ismethod(A.method)
False
>>> ismethod(A.class_method)
True
>>> ismethod(A.static_method)
False
If you prefer to do this with isinstance, then that's possible using types.MethodType:
>>> import types
>>> isinstance(A.method, types.MethodType)
False
>>> isinstance(A.class_method, types.MethodType)
True
>>> isinstance(A.static_method, types.MethodType)
False
Note that these tests will incorrectly identify e.g. A().method because really we're just testing for a bound method as opposed to an unbound function. So the above solutions only work assuming that you are checking A.something where A is a class and something is either a regular method, a class method or a static method.
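To make the caveat concrete, here is a quick sketch: the same ismethod test that correctly singles out the class method on the class also fires for a plain method accessed through an instance:

```python
from inspect import ismethod

class A:
    def method(self): pass

    @classmethod
    def class_method(cls): pass

# On the class, only the classmethod is bound:
print(ismethod(A.method))        # False
print(ismethod(A.class_method))  # True

# On an instance, the regular method is bound too, so the test misfires:
print(ismethod(A().method))      # True
```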
As you know, Python fills the first parameter of a classmethod with a reference to the class itself, and it doesn't matter whether you call that method from the class or from an instance of the class. A method object is a function which has an object bound to it.
That object can be retrieved via the .__self__ attribute. So you can simply check whether the .__self__ attribute is a class or not. If it is a class, its class is type.
One way of doing it:
class Foo:
    @classmethod
    def fn1(cls):
        pass

    def fn2(self):
        pass

def is_classmethod(m):
    first_parameter = getattr(m, '__self__', None)
    if first_parameter is None:
        return False
    type_ = type(first_parameter)
    return type_ is type

print(is_classmethod(Foo.fn1))
print(is_classmethod(Foo().fn1))
print("-----------------------------------")
print(is_classmethod(Foo.fn2))
print(is_classmethod(Foo().fn2))
output:
True
True
-----------------------------------
False
False
There is an ismethod function in the inspect module that specifically checks whether the object is a bound method. You can use it as well before checking the type of the first parameter.
NOTE: There is a caveat with the above solution, I'll mention it at the end.
Solution number 2:
Your isinstance solution didn't work because classmethod is a descriptor. If you want to get the actual classmethod instance, you should check Foo's namespace and get the methods from there.
class Foo:
    @classmethod
    def fn1(cls):
        pass

    def fn2(self):
        pass

def is_classmethod(cls, m):
    return isinstance(cls.__dict__[m.__name__], classmethod)

print(is_classmethod(Foo, Foo.fn1))
print(is_classmethod(Foo, Foo().fn1))
print("-----------------------------------")
print(is_classmethod(Foo, Foo.fn2))
print(is_classmethod(Foo, Foo().fn2))
Solution number 1 caveat: for example, if you have a simple MethodType object whose bound object is a different class, like int here, this solution isn't going to work, because remember we only checked whether the first parameter is of type type:
from types import MethodType

class Foo:
    def fn2(self):
        pass
    fn2 = MethodType(fn2, int)

    @classmethod
    def fn1(cls):
        pass
Now only solution number 2 works.
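To see the contrast directly, a self-contained sketch with the two checks inlined: the __self__ test is fooled by the manually bound fn2, while the namespace test is not:

```python
from types import MethodType

class Foo:
    def fn2(self):
        pass
    fn2 = MethodType(fn2, int)  # manually bind fn2 to the class `int`

    @classmethod
    def fn1(cls):
        pass

# Solution 1's test (is __self__ a class?) is fooled by fn2:
print(type(Foo.fn2.__self__) is type)                # True, but fn2 is no classmethod

# Solution 2's test (what is stored in the namespace?) is not:
print(isinstance(Foo.__dict__["fn2"], classmethod))  # False
print(isinstance(Foo.__dict__["fn1"], classmethod))  # True
```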
Assume the following class definition:
class A:
    def f(self):
        return 'this is f'

    @staticmethod
    def g():
        return 'this is g'

a = A()
So f is a normal method and g is a static method.
Now, how can I check whether the function objects a.f and a.g are static or not? Is there an "isstatic" function in Python?
I have to know this because I have lists containing many different function (method) objects, and to call them I have to know if they are expecting "self" as a parameter or not.
Let's experiment a bit:
>>> import types
>>> class A:
...     def f(self):
...         return 'this is f'
...     @staticmethod
...     def g():
...         return 'this is g'
...
>>> a = A()
>>> a.f
<bound method A.f of <__main__.A instance at 0x800f21320>>
>>> a.g
<function g at 0x800eb28c0>
>>> isinstance(a.g, types.FunctionType)
True
>>> isinstance(a.f, types.FunctionType)
False
So it looks like you can use types.FunctionType to distinguish static methods.
Your approach seems a bit flawed to me, but you can check class attributes:
(in Python 2.7):
>>> type(A.f)
<type 'instancemethod'>
>>> type(A.g)
<type 'function'>
or instance attributes in Python 3.x
>>> a = A()
>>> type(a.f)
<type 'method'>
>>> type(a.g)
<type 'function'>
To supplement the answers here, in Python 3 the best way is like so:
import inspect

class Test:
    @staticmethod
    def test(): pass

isstatic = isinstance(inspect.getattr_static(Test, "test"), staticmethod)
We use getattr_static rather than getattr, since getattr will retrieve the bound method or function, not the staticmethod class object. You can do a similar check for classmethod types and properties (e.g. attributes defined using the @property decorator).
Note that even though it is a staticmethod, don't assume it was defined inside the class. The method source may have originated from another class. To get the true source, you can look at the underlying function's qualified name and module. For example:
class A:
    @staticmethod
    def test(): pass

class B: pass

B.test = inspect.getattr_static(A, "test")
print("true source:", B.test.__qualname__)
Technically, any method can be used as "static" methods, so long as they are called on the class itself, so just keep that in mind. For example, this will work perfectly fine:
class Test:
    def test():
        print("works!")

Test.test()
That example will not work with instances of Test, since the method will be bound to the instance and called as Test.test(self) instead.
Instance and class methods can be used as static methods as well in some cases, so long as the first arg is handled properly.
class Test:
    def test(self):
        print("works!")

Test.test(None)
Perhaps another rare case is a staticmethod that is also bound to a class or instance. For example:
class Test:
    @classmethod
    def test(cls): pass

Test.static_test = staticmethod(Test.test)
Though technically it is a staticmethod, it is really behaving like a classmethod. So in your introspection, you may consider checking the __self__ (recursively on __func__) to see if the method is bound to a class or instance.
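That check could look something like this sketch: follow __func__ down through any bound-method layers and inspect the innermost __self__ (the bound_target name is illustrative, not from the source):

```python
import inspect

def bound_target(obj):
    """Return the innermost __self__ of a (possibly wrapped) bound method,
    or None if the object is not bound at all."""
    target = None
    while inspect.ismethod(obj):
        target = obj.__self__
        obj = obj.__func__
    return target

class Test:
    @classmethod
    def test(cls): pass

Test.static_test = staticmethod(Test.test)

print(bound_target(Test.test) is Test)         # True: bound to the class
print(bound_target(Test.static_test) is Test)  # True: the staticmethod wraps a bound method
print(bound_target(len) is None)               # True: a plain callable is unbound
```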
I happen to have a module that solves this. It's a Python 2/3-compatible solution, and it allows testing methods inherited from a parent class.
Plus, this module can also test:
regular attribute
property style method
regular method
staticmethod
classmethod
For example:
class Base(object):
    attribute = "attribute"

    @property
    def property_method(self):
        return "property_method"

    def regular_method(self):
        return "regular_method"

    @staticmethod
    def static_method():
        return "static_method"

    @classmethod
    def class_method(cls):
        return "class_method"

class MyClass(Base):
    pass
Here's the solution for staticmethod only. But I recommend to use the module posted here.
import inspect

def is_static_method(klass, attr, value=None):
    """Test if a value of a class is a static method.

    example::

        class MyClass(object):
            @staticmethod
            def method():
                ...

    :param klass: the class
    :param attr: attribute name
    :param value: attribute value
    """
    if value is None:
        value = getattr(klass, attr)
    assert getattr(klass, attr) == value

    for cls in inspect.getmro(klass):
        if inspect.isroutine(value):
            if attr in cls.__dict__:
                bound_value = cls.__dict__[attr]
                if isinstance(bound_value, staticmethod):
                    return True
    return False
Why bother? You can just call g like you call f:
a = A()
a.f()
a.g()
I have a class with several methods, each of which has certain properties (in the sense of qualities). I'd like these methods to be available in a list inside the class so they can all be executed at once. Note that the properties can be interchangeable, so this can't be solved by further classes that would inherit from the original one. In an ideal world it would look something like this:
class MyClass:
    def __init__(self):
        self.red_rules = set()
        self.blue_rules = set()
        self.hard_rules = set()
        self.soft_rules = set()

    @red
    def rule_one(self):
        return 1

    @blue
    @hard
    def rule_two(self):
        return 2

    @hard
    def rule_three(self):
        return 3

    @blue
    @soft
    def rule_four(self):
        return 4
When the class is instantiated, it should be easy to simply execute all red and soft rules by combining the sets and executing the methods. The decorators for this are tricky though since a regular registering decorator can fill out a global object but not the class attribute:
def red(fn):
    red_rules.add(fn)
    return fn
How do I go about implementing something like this?
You can subclass set and give it a decorator method:
class MySet(set):
    def register(self, method):
        self.add(method)
        return method

class MyClass:
    red_rules = MySet()
    blue_rules = MySet()
    hard_rules = MySet()
    soft_rules = MySet()

    @red_rules.register
    def rule_one(self):
        return 1

    @blue_rules.register
    @hard_rules.register
    def rule_two(self):
        return 2

    @hard_rules.register
    def rule_three(self):
        return 3

    @blue_rules.register
    @soft_rules.register
    def rule_four(self):
        return 4
Or if you find using the .register method ugly, you can always define the __call__ method to use the set itself as a decorator:
class MySet(set):
    def __call__(self, method):
        """Use set as a decorator to add elements to it."""
        self.add(method)
        return method

class MyClass:
    red_rules = MySet()
    ...

    @red_rules
    def rule_one(self):
        return 1
    ...
This looks better, but it's less explicit, so for other collaborators (or future yourself) it might be harder to grasp what's happening here.
To call the stored functions, you can just loop over the set you want and pass in the instance as the self argument:
my_instance = MyClass()

for rule in MyClass.red_rules:
    rule(my_instance)
You can also create a utility function to do this for you, for example a MySet.invoke() method:
class MySet(set):
    ...

    def invoke(self, obj):
        for rule in self:
            rule(obj)
And now just call:
MyClass.red_rules.invoke(my_instance)
Or you could have MyClass handle this instead:
class MyClass:
    ...

    def invoke_rules(self, rules):
        for rule in rules:
            rule(self)
And then call this on an instance of MyClass:
my_instance.invoke_rules(MyClass.red_rules)
Decorators are applied when the function is defined; in a class that's when the class is defined. At this point in time there are no instances yet!
You have three options:
Register your decorators at the class level. This is not as clean as it may sound; you either have to explicitly pass additional objects to your decorators (red_rules = set(), then @red(red_rules) so the decorator factory can add the function to the right location), or you have to use some kind of class initialiser to pick up specially marked functions; you could do this with a base class that defines the __init_subclass__ class method, at which point you can iterate over the namespace and find those markers (attributes set by the decorators).
Have your __init__ method (or a __new__ method) loop over all the methods on the class and look for special attributes the decorators have put there.
The decorator would only need to add a _rule_name or similar attribute to decorated methods, and {getattr(self, name) for name in dir(self) if getattr(getattr(self, name), '_rule_name', None) == rule_name} would pick up any method that has the right rule name set.
Make your decorators produce new descriptor objects; descriptors have their __set_name__() method called when the class object is created. This gives you access to the class, and thus you can add attributes to that class.
Note that __init_subclass__ and __set_name__ require Python 3.6 or newer; you'd have to resort to a metaclass to achieve similar functionality in earlier versions.
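A minimal sketch of the second option (marker attributes plus a scan in __init__; the rule decorator and _rule_name marker are illustrative names, not fixed API):

```python
def rule(name):
    """Tag the decorated method with a rule name."""
    def decorator(fn):
        fn._rule_name = name
        return fn
    return decorator

class MyClass:
    def __init__(self):
        # Collect bound methods carrying the marker; they are already
        # bound to self, so they can be called without extra arguments.
        self.red_rules = {
            getattr(self, attr) for attr in dir(self)
            if getattr(getattr(self, attr), "_rule_name", None) == "red"
        }

    @rule("red")
    def rule_one(self):
        return 1

    def not_a_rule(self):
        return 0

inst = MyClass()
print({fn() for fn in inst.red_rules})  # {1}
```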
Also note that when you register functions at the class level, you then need to explicitly bind them with function.__get__(self, type(self)) to turn them into methods, or you can explicitly pass in self when calling them. You could automate this by making a dedicated class to hold the rule sets, and make this class a descriptor too:
import types
from collections.abc import MutableSet

class RulesSet(MutableSet):
    def __init__(self, values=(), rules=None, instance=None, owner=None):
        self._rules = rules or set()  # can be a shared set!
        self._instance = instance
        self._owner = owner
        self |= values

    def __repr__(self):
        bound = ''
        if self._owner is not None:
            bound = f', instance={self._instance!r}, owner={self._owner!r}'
        rules = ', '.join([repr(v) for v in iter(self)])
        return f'{type(self).__name__}({{{rules}}}{bound})'

    def __contains__(self, ob):
        try:
            if ob.__self__ is self._instance or ob.__self__ is self._owner:
                # test for the unbound function instead when both are bound,
                # this requires staticmethod and classmethod to be unwrapped!
                ob = ob.__func__
            return any(ob is getattr(f, '__func__', f) for f in self._rules)
        except AttributeError:
            # not a method-like object
            pass
        return ob in self._rules

    def __iter__(self):
        if self._owner is not None:
            return (f.__get__(self._instance, self._owner) for f in self._rules)
        return iter(self._rules)

    def __len__(self):
        return len(self._rules)

    def add(self, ob):
        while isinstance(ob, Rule):
            # remove any rule wrappers
            ob = ob._function
        assert isinstance(ob, (types.FunctionType, classmethod, staticmethod))
        self._rules.add(ob)

    def discard(self, ob):
        self._rules.discard(ob)

    def __get__(self, instance, owner):
        # share the set with a new, bound instance.
        return type(self)(rules=self._rules, instance=instance, owner=owner)

class Rule:
    @classmethod
    def make_decorator(cls, rulename):
        ruleset_name = f'{rulename}_rules'
        def decorator(f):
            return cls(f, ruleset_name)
        decorator.__name__ = rulename
        return decorator

    def __init__(self, function, ruleset_name):
        self._function = function
        self._ruleset_name = ruleset_name

    def __get__(self, *args):
        # this is mostly here just to make Python call __set_name__
        return self._function.__get__(*args)

    def __set_name__(self, owner, name):
        # register, then replace the name with the original function
        # to avoid being a performance bottleneck
        ruleset = getattr(owner, self._ruleset_name, None)
        if ruleset is None:
            ruleset = RulesSet()
            setattr(owner, self._ruleset_name, ruleset)
        ruleset.add(self)
        # transfer control to any further rule objects
        if isinstance(self._function, Rule):
            self._function.__set_name__(owner, name)
        else:
            setattr(owner, name, self._function)

red = Rule.make_decorator('red')
blue = Rule.make_decorator('blue')
hard = Rule.make_decorator('hard')
soft = Rule.make_decorator('soft')
Then just use:
class MyClass:
    @red
    def rule_one(self):
        return 1

    @blue
    @hard
    def rule_two(self):
        return 2

    @hard
    def rule_three(self):
        return 3

    @blue
    @soft
    def rule_four(self):
        return 4
and you can access self.red_rules, etc. as a set with bound methods:
>>> inst = MyClass()
>>> inst.red_rules
RulesSet({<bound method MyClass.rule_one of <__main__.MyClass object at 0x106fe7550>>}, instance=<__main__.MyClass object at 0x106fe7550>, owner=<class '__main__.MyClass'>)
>>> inst.blue_rules
RulesSet({<bound method MyClass.rule_two of <__main__.MyClass object at 0x106fe7550>>, <bound method MyClass.rule_four of <__main__.MyClass object at 0x106fe7550>>}, instance=<__main__.MyClass object at 0x106fe7550>, owner=<class '__main__.MyClass'>)
>>> inst.hard_rules
RulesSet({<bound method MyClass.rule_three of <__main__.MyClass object at 0x106fe7550>>, <bound method MyClass.rule_two of <__main__.MyClass object at 0x106fe7550>>}, instance=<__main__.MyClass object at 0x106fe7550>, owner=<class '__main__.MyClass'>)
>>> inst.soft_rules
RulesSet({<bound method MyClass.rule_four of <__main__.MyClass object at 0x106fe7550>>}, instance=<__main__.MyClass object at 0x106fe7550>, owner=<class '__main__.MyClass'>)
>>> for rule in inst.hard_rules:
...     rule()
...
2
3
The same rules are accessible on the class; normal functions remain unbound:
>>> MyClass.blue_rules
RulesSet({<function MyClass.rule_two at 0x107077a60>, <function MyClass.rule_four at 0x107077b70>}, instance=None, owner=<class '__main__.MyClass'>)
>>> next(iter(MyClass.blue_rules))
<function MyClass.rule_two at 0x107077a60>
Containment testing works as expected:
>>> inst.rule_two in inst.hard_rules
True
>>> inst.rule_two in inst.soft_rules
False
>>> MyClass.rule_two in MyClass.hard_rules
True
>>> MyClass.rule_two in inst.hard_rules
True
You can use these rules to register classmethod and staticmethod objects too:
>>> class Foo:
...     @hard
...     @classmethod
...     def rule_class(cls):
...         return f'rule_class of {cls!r}'
...
>>> Foo.hard_rules
RulesSet({<bound method Foo.rule_class of <class '__main__.Foo'>>}, instance=None, owner=<class '__main__.Foo'>)
>>> next(iter(Foo.hard_rules))()
"rule_class of <class '__main__.Foo'>"
>>> Foo.rule_class in Foo.hard_rules
True