I'm using __init_subclass__ in a project, and I balked when I saw the built-in method kick off as soon as the code first runs in the interpreter -- without being directly referenced via instantiation of the containing class or any of the subclasses it enumerates.
Can someone tell me what's going on, and point me to any examples of its safe use?
class Timer():
    def __init__(self):
        pass

    def __init_subclass__(cls):
        print('Runner.', cls)
        print('Timer Dictionary :', Timer.__dict__.keys())
        # print(Timer.__init_subclass__())  # Forbidden fruit...
        pass

class Event(Timer):
    print("I'll take my own bathroom selfies...thanks anyway.")

    def __init__(self):
        print('This is nice, meeting on a real date.')

if __name__ == '__main__':  # a good place for a breakpoint
    date = Event()
    date
Edit --------------------------------------------------
Based on the explanations received, the original code was retooled into something useful.
class Timer():
    subclasses = {}

    def __init__(self):
        pass

    def __init_subclass__(cls, **kwargs):
        print('Runner.', cls)
        print('Timer Dictionary :', Timer.__dict__.keys())
        # print(Timer.__init_subclass__())  # Forbidden fruit...
        super().__init_subclass__(**kwargs)
        cls.subclasses[cls] = []

class Event(Timer):
    print("I'll take my own bathroom selfies...thanks anyway.")

    def __init__(self):
        print('This is nice, meeting on a real date.')
        if self.__class__ in super().subclasses:
            # get the index and link the two
            super().subclasses[self.__class__].append(self)

if __name__ == '__main__':  # a good place for a breakpoint
    date = Event()
    date
    duty = Event()
    duty
    print(Timer.subclasses)
Here's a minimal example:
class Super():
    def __init_subclass__(cls):
        print(cls)

class Sub(Super):
    pass
Running this:
$ python test.py
<class '__main__.Sub'>
Why is that? According to Python's data model docs:
Whenever a class inherits from another class, __init_subclass__() is called on the parent class.
Sub inherits from Super, so Super.__init_subclass__() gets called.
Specifically, type_new() invokes __init_subclass__ in the CPython implementation.
The rationale is detailed in PEP 487.
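As for examples of safe use: the motivating case in PEP 487 is subclass registration, where a base class records each subclass as it is defined, with no metaclass required. A minimal sketch (the plugin names here are illustrative, not from your code):

class PluginBase:
    plugins = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)  # cooperate with other base classes
        PluginBase.plugins.append(cls)       # runs once, at class-definition time

class CsvPlugin(PluginBase):
    pass

class JsonPlugin(PluginBase):
    pass

print(PluginBase.plugins)
# [<class '__main__.CsvPlugin'>, <class '__main__.JsonPlugin'>]

The hook firing at definition time is exactly what makes this work: the registry is complete as soon as the module has been imported, before anything is instantiated.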
Related
I am trying to execute the code below, but I get errors.
class base:
    def callme(data):
        print(data)

class A(base):
    def callstream(self):
        B.stream(self)

    def callme(data):
        print("child ", data)

class B:
    def stream(data):
        # The statement below doesn't work, but I want it to run to achieve
        # runtime polymorphism, where the method call is not hardcoded to a
        # certain class reference.
        (base)data.callme("streaming data")
        # The statement below works, but it won't call the child class's
        # overridden method. I can use A.callme() to call the child class
        # method, but then it is hardcoded to A, which defeats the purpose.
        # Any class (A, B, or XYZ) that inherits from base should be able to
        # read stream data from the stream class. How do I achieve this in
        # Python? Any class should be able to read the stream data as long
        # as it inherits from the base class; that gives my stream class the
        # generic ability to be used by any client class.
        #base.callme("streaming data")

def main():
    ob = A()
    ob.callstream()

if __name__=="__main__":
    main()
I got the output you say you're looking for (in a comment rather than the question -- tsk, tsk) with the following code, based on the code in your question:
class base:
    def callme(self, data):
        print(data)

class A(base):
    def callstream(self):
        B.stream(self)

    def callme(self, data):
        print("child", data)

class B:
    @classmethod
    def stream(cls, data):
        data.callme("streaming data")

def main():
    ob = A()
    ob.callstream()

if __name__=="__main__":
    main()
Basically, I just made sure the instance methods had self parameters, and since you seem to be using B.stream() as a class method, I declared it as such.
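To see the runtime polymorphism this buys you, here is a quick sketch (the XYZ class is invented for illustration) that reuses B.stream() with another subclass of base, assuming the definitions above:

class XYZ(base):
    def callme(self, data):
        print("XYZ", data)

B.stream(XYZ())  # prints: XYZ streaming data

Since stream() just calls data.callme(), Python's normal attribute lookup finds the most-derived override, whatever the class is.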
I found a really good example of how to add a new method to a class dynamically (a transplant class):
def say(host, msg):
    print '%s says %s' % (host.name, msg)

def funcToMethod(func, clas, method_name=None):
    setattr(clas, method_name or func.__name__, func)

class transplant:
    def __init__(self, method, host, method_name=None):
        self.host = host
        self.method = method
        setattr(host, method_name or method.__name__, self)

    def __call__(self, *args, **kwargs):
        nargs = [self.host]
        nargs.extend(args)
        return apply(self.method, nargs, kwargs)

class Patient:
    def __init__(self, name):
        self.name = name

if __name__ == '__main__':
    jimmy = Patient('Jimmy')
    transplant(say, jimmy, 'say1')
    funcToMethod(say, jimmy, 'say2')
    jimmy.say1('Hello')
    jimmy.say2(jimmy, 'Good Bye!')
But I don't understand, how to modify it for adding static methods. Can someone help me?
All you need to do is wrap the function in a staticmethod() call:
say = staticmethod(say)
or apply it as a decorator to the function definition:
@staticmethod
def say(host, msg):
    # ...
which comes down to the same thing.
Just remember: the @decorator syntax is just syntactic sugar for writing target = decorator(target), where target is the decorated object.
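For example, these two spellings end up equivalent (a quick sketch; the tell name is just for illustration):

class Patient:
    @staticmethod
    def tell(msg):
        print(msg)

class Patient:
    def tell(msg):
        print(msg)
    tell = staticmethod(tell)  # exactly what the decorator form does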
I don't see a staticmethod here. The say function is expecting two arguments, and the first argument, host, appears to be the instance of the class.
So it seems like you are simply trying to attach a new method to a class. That can be done without funcToMethod or transplant:
def say(self, msg):
    print '%s says %s' % (self.name, msg)

class Patient:
    def __init__(self, name):
        self.name = name

if __name__ == '__main__':
    jimmy = Patient('Jimmy')
    Patient.say = say
    jimmy.say('Hello')
yields
Jimmy says Hello
If you did want to attach a staticmethod, then, as MartijnPieters answered, use the staticmethod decorator:
def tell(msg):
    print(msg)

if __name__ == '__main__':
    jimmy = Patient('Jimmy')
    Patient.tell = staticmethod(tell)
    jimmy.tell('Goodbye')
yields
Goodbye
The above shows how new methods can be attached to a class without funcToMethod or transplant. Both funcToMethod and transplant try to attach functions to instances of the class rather than the class itself. This is wrong-headed, which is why it requires contortions (like having to pass jimmy as an argument in jimmy.say2(jimmy, 'Good Bye!')) to make it work. Methods should be defined on the class (e.g. Patient), not on the instance (e.g. jimmy).
transplant is particularly horrible. It uses a class when a function would suffice. It uses the archaic apply instead of the modern self.method(*nargs, **kwargs) syntax, and it ignores the PEP 8 convention of CamelCase class names. In its defense, it was written over ten years ago. But fundamentally, what makes it anathema to good programming is that you just don't need it.
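That said, if you really do want a callable bound to a single instance rather than to the class, the standard library already handles the binding; a sketch using types.MethodType in place of transplant:

import types

def say(self, msg):
    print('%s says %s' % (self.name, msg))

class Patient:
    def __init__(self, name):
        self.name = name

jimmy = Patient('Jimmy')
jimmy.say = types.MethodType(say, jimmy)  # bound to jimmy only, not to Patient
jimmy.say('Hello')                        # Jimmy says Hello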
Well, the following works (in Python 3, where a plain function attached to a class can be called through the class directly), i.e. it puts a static method on Patient, which I think is what the OP was wanting.
def tell(msg):
    print(msg)
...
funcToMethod(tell, Patient, 'say3')
...
Patient.say3('Bye!')
I'm researching the new version of pytest (2.3) and getting very excited about the new functionality, where you
"can precisely control teardown by registering one or multiple
teardown functions as soon as they have performed some actions which
need undoing, eliminating the need for a separate “teardown”
decorator"
from here
It's all pretty clear when it's used as a function, but how do I use it in a class?
class Test(object):
    @pytest.setup(scope='class')
    def stp(self):
        self.propty = "something"

    def test_something(self):
        ...  # some code
        # need to add something to the teardown

    def test_something_else(self):
        ...  # some code
        # need to add even more to the teardown
Ok, I got it working by having a 'session'-wide funcarg finalizer:
import pytest

@pytest.fixture(scope="session")
def finalizer():
    return Finalizer()

class Finalizer(object):
    def __init__(self):
        self.fin_funcs = []

    def add_fin_func(self, func):
        self.fin_funcs.append(func)

    def remove_fin_func(self, func):
        try:
            self.fin_funcs.remove(func)
        except:
            pass

    def execute(self):
        for func in reversed(self.fin_funcs):
            func()
        self.fin_funcs = []

class TestSomething(object):
    @classmethod
    @pytest.fixture(scope="class", autouse=True)
    def setup(self, request, finalizer):
        self.finalizer = finalizer
        request.addfinalizer(self.finalizer.execute)
        self.finalizer.add_fin_func(lambda: some_teardown())

    def test_with_teardown(self):
        # some test
        self.finalizer.add_fin_func(self.additional_teardown)

    def additional_teardown(self):
        # additional teardown
        pass
Thanks @hpk42 for answering e-mails and helping me get the final version.
NOTE: together with xfailing the rest of the steps and improved scenarios, this now makes a pretty good Test-Step structure.
Indeed, there are no good examples for teardown yet. The request object has an addfinalizer method. Here is an example usage:
@pytest.setup(scope=...)
def mysetup(request):
    ...
    request.addfinalizer(finalizerfunction)
    ...
The finalizerfunction will be called when all tests within the scope have finished execution.
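For instance, here is a sketch (the fixture and resource names are invented) showing that several finalizers can be registered and that they run in reverse order of registration when the scope ends:

import pytest

@pytest.fixture(scope="module")
def resource(request):
    res = {"open": True}  # stand-in for a real resource
    request.addfinalizer(lambda: print("runs last"))   # registered first
    request.addfinalizer(lambda: print("runs first"))  # registered last
    return res

def test_uses_resource(resource):
    assert resource["open"]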
I make use of PyCLIPS to integrate CLIPS into Python. Python methods are registered in CLIPS using clips.RegisterPythonFunction(method, optional-name). Since I have to register several functions and want to keep the code clear, I am looking for a decorator to do the registration.
This is how it is done now:
class CLIPS(object):
    ...
    def __init__(self, data):
        self.data = data
        clips.RegisterPythonFunction(self.pyprint, "pyprint")

    def pyprint(self, value):
        print self.data, "".join(map(str, value))
and this is how I would like to do it:
class CLIPS(object):
    ...
    def __init__(self, data):
        self.data = data
        #clips.RegisterPythonFunction(self.pyprint, "pyprint")

    @clips_callable
    def pyprint(self, value):
        print self.data, "".join(map(str, value))
It keeps defining the methods and registering them in one place.
NB: I use this in a multiprocessing set-up in which the CLIPS instance runs in a separate process, like this:
import clips
import multiprocessing

class CLIPS(object):
    def __init__(self, data):
        self.environment = clips.Environment()
        self.data = data
        clips.RegisterPythonFunction(self.pyprint, "pyprint")
        self.environment.Load("test.clp")

    def Run(self, cycles=None):
        self.environment.Reset()
        self.environment.Run()

    def pyprint(self, value):
        print self.data, "".join(map(str, value))

class CLIPSProcess(multiprocessing.Process):
    def run(self):
        p = multiprocessing.current_process()
        self.c = CLIPS("%s %s" % (p.name, p.pid))
        self.c.Run()

if __name__ == "__main__":
    p = multiprocessing.current_process()
    c = CLIPS("%s %s" % (p.name, p.pid))
    c.Run()
    # Now run CLIPS from another process
    cp = CLIPSProcess()
    cp.start()
It should be fairly simple to do it like this:
# mock clips for testing
class clips:
    @staticmethod
    def RegisterPythonFunction(func, name):
        print "register: ", func, name

def clips_callable(fnc):
    clips.RegisterPythonFunction(fnc, fnc.__name__)
    return fnc

@clips_callable
def test():
    print "test"

test()
Edit: if used on a method of a class, it will register the unbound method only, so it won't work if the function is later called without an instance of the class as the first argument. It is therefore usable for registering module-level functions, but not methods. To register those, you'll have to do it in __init__.
It seems that the elegant solution proposed by mata wouldn't work because the CLIPS environment should be initialized before registering methods to it.
I'm not a Python expert, but from some searching it seems that a combination of inspect.getmembers() and hasattr() will do the trick for you - you could loop over all members of your class and register the ones that have the clips_callable attribute with CLIPS.
Got it working now by using a decorator to set an attribute on each method to be registered in CLIPS, and using inspect in __init__ to fetch the methods and register them. I could have used some naming strategy as well, but I prefer a decorator to make the registering more explicit. Python functions can be registered before initializing a CLIPS environment. This is what I have done.
import inspect
from functools import wraps

def clips_callable(func):
    @wraps(func)
    def wrapper(*__args, **__kw):
        return func(*__args, **__kw)
    setattr(wrapper, "clips_callable", True)
    return wrapper

class CLIPS(object):
    def __init__(self, data):
        members = inspect.getmembers(self, inspect.ismethod)
        for name, method in members:
            try:
                if method.clips_callable:
                    clips.RegisterPythonFunction(method, name)
            except:
                pass
        ...

    @clips_callable
    def pyprint(self, value):
        print self.data, "".join(map(str, value))
For completeness, the CLIPS code in test.clp is included below.
(defrule MAIN::start-me-up
=>
(python-call pyprint "Hello world")
)
If somebody knows a more elegant approach, please let me know.
I need to register an atexit function for use with a class (see Foo below for an example) that, unfortunately, I have no direct way of cleaning up via a method call: other code, that I don't have control over, calls Foo.start() and Foo.end() but sometimes doesn't call Foo.end() if it encounters an error, so I need to clean up myself.
I could use some advice on closures in this context:
import atexit

class Foo:
    def cleanup(self):
        # do something here
        pass

    def start(self):
        def do_cleanup():
            self.cleanup()
        atexit.register(do_cleanup)

    def end(self):
        # cleanup is no longer necessary... how do we unregister?
        pass
Will the closure work properly, e.g. in do_cleanup, is the value of self bound correctly?
How can I unregister an atexit() routine?
Is there a better way to do this?
edit: this is Python 2.6.5
Make a global registry and a single atexit function that calls everything in it, and remove cleaners from the registry when they are no longer needed.
import atexit

cleaners = set()

def _call_cleaners():
    for cleaner in list(cleaners):
        cleaner()

atexit.register(_call_cleaners)

class Foo(object):
    def cleanup(self):
        if self.cleaned:
            raise RuntimeError("ALREADY CLEANED")
        self.cleaned = True

    def start(self):
        self.cleaned = False
        cleaners.add(self.cleanup)

    def end(self):
        self.cleanup()
        cleaners.remove(self.cleanup)
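A quick usage sketch of the registry above:

foo = Foo()
foo.start()
foo.end()    # cleaned up immediately and removed from the registry

bar = Foo()
bar.start()  # if end() is never called, _call_cleaners() cleans it at exit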
I think the code is fine. There's no way to unregister, but you can set a boolean flag that would disable cleanup:
import atexit

class Foo:
    def __init__(self):
        self.need_cleanup = True

    def cleanup(self):
        # do something here
        print 'clean up'

    def start(self):
        def do_cleanup():
            if self.need_cleanup:
                self.cleanup()
        atexit.register(do_cleanup)

    def end(self):
        # cleanup is no longer necessary... how do we unregister?
        self.need_cleanup = False
Lastly, bear in mind that atexit handlers don't get called if "the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called."
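If those signal cases matter to you, a common workaround (a sketch, not something atexit provides) is to convert the signal into a normal SystemExit so the registered handlers still run:

import signal
import sys

def _exit_gracefully(signum, frame):
    sys.exit(1)  # raises SystemExit, so atexit handlers get to run

signal.signal(signal.SIGTERM, _exit_gracefully)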
self is bound correctly inside the do_cleanup callback, but in fact, if all you are doing is calling the method, you might as well use the bound method directly.
You use atexit.unregister() to remove the callback, but there is a catch: you must unregister the same function that you registered, and since you used a nested function, that means you have to store a reference to that function. If you follow my suggestion of using a bound method, then you still have to save a reference to it:
import atexit

class Foo:
    def cleanup(self):
        # do something here
        pass

    def start(self):
        self._cleanup = self.cleanup  # need to save the bound method for unregister
        atexit.register(self._cleanup)

    def end(self):
        atexit.unregister(self._cleanup)
Note that it is still possible for your code to exit without calling the atexit-registered functions, for example if the process is aborted with ctrl+break on Windows or killed with SIGABRT on Linux.
Also, as another answer suggests, you could just use __del__, but that can be problematic for cleanup while a program is exiting, as it may not be called until after other globals it needs to access have been deleted.
Edited to note that when I wrote this answer the question didn't specify Python 2.x. Oh well, I'll leave the answer here anyway in case it helps anyone else.
Since shanked deleted his posting, I'll speak in favor of __del__ again:
import atexit, weakref

class Handler:
    def __init__(self, obj):
        self.obj = weakref.ref(obj)

    def cleanup(self):
        if self.obj is not None:
            obj = self.obj()
            if obj is not None:
                obj.cleanup()

class Foo:
    def __init__(self):
        self.start()

    def cleanup(self):
        print "cleanup"
        self.cleanup_handler = None

    def start(self):
        self.cleanup_handler = Handler(self)
        atexit.register(self.cleanup_handler.cleanup)

    def end(self):
        if self.cleanup_handler is None:
            return
        self.cleanup_handler.obj = None
        self.cleanup()

    def __del__(self):
        self.end()

a1 = Foo()
a1.end()
a1 = Foo()
a2 = Foo()
del a2
a3 = Foo()
a3.m = a3
This supports the following cases:
- objects where .end is called regularly: cleanup right away
- objects that are released without .end being called: cleanup when the last reference goes away
- objects living in cycles: cleanup at exit, via atexit
- objects that are kept alive: cleanup at exit, via atexit

Notice that it is important that the cleanup handler holds a weak reference to the object, as it would otherwise keep the object alive.
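A tiny sketch of that point: a weak reference reads as None once the object is gone, instead of keeping it alive:

import weakref

class Obj(object):
    pass

o = Obj()
r = weakref.ref(o)
print(r() is o)  # True: the object is still alive
del o
print(r())       # None: the weakref did not keep the object alive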
Edit: Cycles involving Foo will not be garbage-collected, since Foo implements __del__. To allow for the cycle being deleted at garbage collection time, the cleanup must be taken out of the cycle.
class Cleanup:
    cleaned = False

    def cleanup(self):
        if self.cleaned:
            return
        print "cleanup"
        self.cleaned = True

    def __del__(self):
        self.cleanup()

class Foo:
    def __init__(self): ...

    def start(self):
        self.cleaner = Cleanup()
        atexit.register(Handler(self).cleanup)

    def cleanup(self):
        self.cleaner.cleanup()

    def end(self):
        self.cleanup()
It's important that the Cleanup object has no references back to Foo.
Why don't you try it? It only took me a minute to check.
(Answer: Yes)
However, you can simplify it. The closure isn't needed.
import atexit

class Foo:
    def cleanup(self):
        pass

    def start(self):
        atexit.register(self.cleanup)
And to avoid cleaning up twice, just check in the cleanup method whether cleanup is still needed before you clean up.
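A minimal sketch of that check (the _cleaned flag is my addition, not from the code above):

import atexit

class Foo:
    def __init__(self):
        self._cleaned = False

    def cleanup(self):
        if self._cleaned:
            return            # already cleaned; later calls are no-ops
        self._cleaned = True
        # do the actual cleanup here

    def start(self):
        atexit.register(self.cleanup)

    def end(self):
        self.cleanup()        # the atexit call will then do nothing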