My experiment code is like:
import signal

def hi(signum, frame):
    print "hi"

signal.signal(signal.SIGINT, hi)
signal.signal(signal.SIGINT, signal.SIG_IGN)
"hi" doesn't get printed, because the handler is overridden by signal.SIG_IGN.
How can I avoid this?
You could check whether a handler is already installed. If so, put your desired handler and the old handler in a wrapper function that calls both of them.
import signal

def append_signal(sig, f):
    old = None
    if callable(signal.getsignal(sig)):
        old = signal.getsignal(sig)
    def helper(*args, **kwargs):
        if old is not None:
            old(*args, **kwargs)
        f(*args, **kwargs)
    signal.signal(sig, helper)
If you don't want to override your own handler, check to see if you've set one:
if signal.getsignal(signal.SIGINT) in [signal.SIG_IGN, signal.SIG_DFL]:
    signal.signal(signal.SIGINT, hi)
According to the documentation, it is possible that some superior process has already reassigned the handler from the default (in which case getsignal() returns None). If you don't want to override that either, add None to the list.
The obvious wrapper for signal.signal(..., signal.SIG_IGN) would be a not in test.
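Concretely, that might look like this (a sketch; the guard around SIG_IGN is my reading of the answer above):

    import signal

    def hi(signum, frame):
        print "hi"

    # Install hi only if nothing else is handling SIGINT yet.
    if signal.getsignal(signal.SIGINT) in [None, signal.SIG_IGN, signal.SIG_DFL]:
        signal.signal(signal.SIGINT, hi)

    # The "not in" wrapper for SIG_IGN: don't clobber a custom handler.
    current = signal.getsignal(signal.SIGINT)
    if current not in [None, signal.SIG_IGN, signal.SIG_DFL]:
        pass  # a custom handler (here: hi) is installed; leave it alone
    else:
        signal.signal(signal.SIGINT, signal.SIG_IGN)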
added in response to comment
Chaining signal handlers is not often done because signals are so granular. If I really wanted to do this, I'd follow the model of atexit and register functions to be called by your handler.
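A minimal sketch of that registration model (the names _sigint_callbacks, register_sigint and _dispatch are mine, not from any library):

    import signal

    _sigint_callbacks = []  # functions the single real handler calls, in order

    def register_sigint(func):
        """Register func to run when SIGINT arrives (atexit-style)."""
        _sigint_callbacks.append(func)

    def _dispatch(signum, frame):
        # The one real handler: fan the signal out to every registered callback.
        for func in _sigint_callbacks:
            func(signum, frame)

    signal.signal(signal.SIGINT, _dispatch)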
Simply do the same as you would in C:

import os
import signal

sig_hand_prev = None

def signal_handler(signum, frame):
    # ... do your own handling here ...
    signal.signal(signum, sig_hand_prev)  # restore the previous handler
    os.kill(os.getpid(), signum)          # re-raise the signal for it

def install_handler(signum):
    global sig_hand_prev
    sig_hand_prev = signal.signal(signum, signal_handler)
The key idea here is that you save only the previous handler and re-raise the signal once you have finished your work. This way the signal handlers effectively form a singly linked list.
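Usage is then just (assuming the definitions above):

    install_handler(signal.SIGINT)
    # On Ctrl-C, signal_handler runs first, then re-raises SIGINT for
    # whatever handler was installed before it.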
I understand that __init__() is called automatically when you create an instance like newThread = MyThread(property), and run() is triggered by newThread.start(). What I am looking for is something that is called automatically before a thread terminates, so I don't have to explicitly call self.cleanUp() before each return statement.
from threading import Thread

class MyThread(Thread):
    def __init__(self, property):
        Thread.__init__(self)
        self.property = property

    def cleanUp(self):
        # Clean up here
        pass

    def run(self):
        # Do some stuff
        self.cleanUp()  # Current workaround
        return
One way to do this is by making the Thread subclass also a context manager. This will effectively make __exit__() the special method you want triggered.
The following shows what I'm proposing. Note: I renamed the property argument you were passing to the constructor, because property is the name of a Python built-in.
from threading import Thread
import time

TEST_THREAD_EXCEPTION = False  # change as desired

class MyThread(Thread):
    def __init__(self, attribute):
        Thread.__init__(self)
        self.attribute = attribute

    def cleanup(self):
        # Clean up here
        print(' cleaning up after thread')

    def run(self):
        if TEST_THREAD_EXCEPTION:
            raise RuntimeError('OOPS!')  # force exception
        print(' other thread now running...')
        time.sleep(2)  # Do something...

    def __enter__(self):
        try:
            self.run()
        except Exception as exc:
            print('Error: {} exception raised by thread'.format(exc))
            raise  # reraise the exception
        return self

    def __exit__(self, *args):
        self.cleanup()

print('main thread begins execution')
with MyThread('hello') as thread:
    print('doing other things in main thread while other thread is running')
print('main thread continuing...')
Output:
main thread begins execution
other thread now running...
doing other things in main thread while other thread is running
cleaning up after thread
main thread continuing...
If you change TEST_THREAD_EXCEPTION to True, cleanup() won't be called, since the thread didn't run successfully. You could change that if you wished, but you may also need to ensure that cleanup doesn't get called twice. Here's what the code above does in that case:
main thread begins execution
Error: OOPS! exception raised by thread
Traceback (most recent call last):
File "opposite_init.py", line 37, in <module>
with MyThread('hello') as thread:
File "opposite_init.py", line 27, in __enter__
self.run()
File "opposite_init.py", line 21, in run
raise RuntimeError('OOPS!') # force exception
RuntimeError: OOPS!
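If you did want cleanup() to run even when the thread's run() fails, one option (a sketch, not part of the original answer) is to clean up in __enter__ before re-raising; since __exit__ is never called when __enter__ raises, this won't clean up twice:

    def __enter__(self):
        try:
            self.run()
        except Exception as exc:
            print('Error: {} exception raised by thread'.format(exc))
            self.cleanup()  # __exit__ won't run if __enter__ raises
            raise
        return self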
As stated on the Python mailing list, __del__ shouldn't be considered the opposite of __init__; instead you can use the with syntax, i.e. a context manager:
you cannot be sure that an object's destructor (__del__()) will ever be called. If you want to make sure that a particular object gets processed, one approach is the with-syntax.
Or you can also look into the try...finally clause, in which the finally statement will always get run.
from threading import Thread

class MyThread(Thread):
    def __init__(self, property):
        Thread.__init__(self)
        self.property = property

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print('starting cleanup')
        # Clean up here

    def run(self):
        # Do some stuff
        return

# Now you can call it like this:
with MyThread("spam") as spam:
    print("The thread is running")
    # you can also do stuff here
You can use the try...finally clause like so:
from threading import Thread

class MyThread(Thread):
    def __init__(self, property):
        Thread.__init__(self)
        self.property = property

    def cleanUp(self):
        # Clean up here
        print('starting cleanup')

    def run(self):
        # Do some stuff
        return

spam = MyThread('spam')  # create outside try, so cleanUp can't hit a NameError
try:
    spam.start()
    print('The thread is running')
finally:
    spam.cleanUp()
If the problem you're trying to solve is that you don't want to add code to each of your run() methods to call your cleanup function, then I'd suggest making a custom subclass of Thread which does that for you. Something like this, perhaps:
from threading import Thread

class CleanupThread(Thread):
    def cleanup(self):
        # Override this method in your subclasses to do cleanup.
        pass

    def run2(self):
        # Override this method in your subclasses instead of run().
        pass

    def run(self):
        # Do *not* override this in your subclasses. Override run2() instead.
        try:
            self.run2()
        finally:
            self.cleanup()
Of course, you're free to rename run2 to something that makes sense for you.
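Usage might look like this (MyWorker is a made-up example subclass):

    class MyWorker(CleanupThread):
        def run2(self):
            print('doing the real work')

        def cleanup(self):
            print('cleaning up')  # runs even if run2() raises

    worker = MyWorker()
    worker.start()
    worker.join()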
Python does not offer a built-in equivalent of this, if that's what you're looking for.
I use a with statement with the following class.
def __init__(self):
    ...

def __enter__(self):
    return self

def __exit__(self, type, value, traceback):
    print "EXIT Shutting the SDK down"
    ret = self.sdkobject.ShutDown()
    self.error_check(ret)
This catches any errors that occur while I am using an instance of the class and safely shuts down the SDK that I am using. However, it doesn't catch problems while the class is still initializing. I recently found the __del__ method, which neatly solves this problem; however, it can't be used in conjunction with __exit__ (the with statement invokes __exit__, and then __del__ gets an exception). How can I set up a destructor using a with statement that will catch failures even during initialization?
Exceptions in the __init__ need to be dealt with directly in that method:
class YourContextManager(object):
    sdkobject = None

    def __init__(self):
        try:
            self._create_sdk_object()
        except Exception:
            if self.sdkobject is not None:
                self.sdkobject.ShutDown()
            raise

    def _create_sdk_object(self):
        self.sdkobject = SomeSDKObject()
        self.sdkobject.do_something_that_could_raise_an_exception()

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        print "EXIT Shutting the SDK down"
        ret = self.sdkobject.ShutDown()
        self.error_check(ret)
Note that the exception is re-raised; you want to give the consumer of the context manager an opportunity to handle the failure to create a context manager.
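On the consumer side, that might look like this (a sketch):

    try:
        with YourContextManager() as manager:
            pass  # use manager.sdkobject here
    except Exception:
        # Failures during __init__ land here too; the SDK has already
        # been shut down by the except clause in __init__.
        print "could not set up or use the SDK"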
Create a separate shutdown function that gets called in the try/except block of the __init__ and wherever else you need a proper shutdown.
Catch the exception in __init__ and handle it. __del__ is unnecessary.
I have a signal_handler connected through a decorator, something like this very simple one:
@receiver(post_save, sender=User,
          dispatch_uid='myfile.signal_handler_post_save_user')
def signal_handler_post_save_user(sender, *args, **kwargs):
    # do stuff
What I want to do is to mock it with the mock library http://www.voidspace.org.uk/python/mock/ in a test, to check how many times django calls it. My code at the moment is something like:
def test_cache():
    with mock.patch('myapp.myfile.signal_handler_post_save_user') as mocked_handler:
        pass  # do stuff that will call the post_save of User
    self.assert_equal(mocked_handler.call_count, 1)
The problem here is that the original signal handler is called even when mocked, most likely because the @receiver decorator stores a reference to the signal handler somewhere, so I'm mocking the wrong code.
So the question: how do I mock my signal handler to make my test work?
Note that if I change my signal handler to:
def _support_function(*args, **kwargs):
    # do stuff
    pass

@receiver(post_save, sender=User,
          dispatch_uid='myfile.signal_handler_post_save_user')
def signal_handler_post_save_user(sender, *args, **kwargs):
    _support_function(*args, **kwargs)
and I mock _support_function instead, everything works as expected.
Possibly a better idea is to mock out the functionality inside the signal handler rather than the handler itself. Using the OP's code:
@receiver(post_save, sender=User, dispatch_uid='myfile.signal_handler_post_save_user')
def signal_handler_post_save_user(sender, *args, **kwargs):
    do_stuff()  # <-- mock this

def do_stuff():
    ...  # do stuff in here
Then mock do_stuff:
with mock.patch('myapp.myfile.do_stuff') as mocked_handler:
    # trigger the post_save of User, then:
    self.assert_equal(mocked_handler.call_count, 1)
So, I ended up with a kind-of solution: mocking a signal handler simply means to connect the mock itself to the signal, so this exactly is what I did:
def test_cache():
    with mock.patch('myapp.myfile.signal_handler_post_save_user', autospec=True) as mocked_handler:
        post_save.connect(mocked_handler, sender=User, dispatch_uid='test_cache_mocked_handler')
        # do stuff that will call the post_save of User
    self.assertEquals(mocked_handler.call_count, 1)  # standard django
    # self.assert_equal(mocked_handler.call_count, 1)  # when using django-nose
Notice that autospec=True in mock.patch is required for post_save.connect to work correctly on a MagicMock; otherwise django will raise some exceptions and the connection will fail.
You can mock a django signal by mocking the ModelSignal class in django.db.models.signals, like this:

@patch("django.db.models.signals.ModelSignal.send")
def test_overwhelming(self, mocker_signal):
    obj = Object()

That should do the trick. Note that this mocks ALL signals, no matter which object you are using.
If by any chance you use the mocker library instead, it can be done like this:
from mocker import Mocker, ARGS, KWARGS

def test_overwhelming(self):
    mocker = Mocker()
    # mock the post save signal
    msave = mocker.replace("django.db.models.signals")
    msave.post_save.send(KWARGS)
    mocker.count(0, None)
    with mocker:
        obj = Object()
It's more lines but it works pretty well too :)
Take a look at mock_django. It has support for signals:
https://github.com/dcramer/mock-django/blob/master/tests/mock_django/signals/tests.py
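If I remember its API correctly, mock_signal_receiver connects a mock receiver to a signal for the duration of a with block; treat this as a sketch and check the linked tests for the exact API:

    from mock_django.signals import mock_signal_receiver
    from django.db.models.signals import post_save

    def test_post_save(self):
        with mock_signal_receiver(post_save, sender=User) as receiver:
            User.objects.create(username='test')
            self.assertEqual(receiver.call_count, 1)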
In django 1.9 you can mock all receivers with something like this
# replace actual receivers with mocks
mocked_receivers = []
for i, receiver in enumerate(your_signal.receivers):
    mock_receiver = Mock()
    your_signal.receivers[i] = (receiver[0], mock_receiver)
    mocked_receivers.append(mock_receiver)

...  # whatever your test does

# ensure that mocked receivers have been called as expected
for mocked_receiver in mocked_receivers:
    assert mocked_receiver.call_count == 1
    mocked_receiver.assert_called_with(*your_args, sender="your_sender",
                                       signal=your_signal, **your_kwargs)
This replaces all receivers with mocks: ones you've registered, ones pluggable apps have registered, and ones that django itself has registered. Don't be surprised if you use this on post_save and things start breaking.
You may want to inspect the receiver to determine if you actually want to mock it.
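For example, something along these lines (a sketch; it relies on the internal detail that each entry in signal.receivers is a (lookup_key, receiver) pair where the receiver may be stored as a weak reference):

    import weakref
    from mock import Mock

    mocked_receivers = []
    for i, (lookup_key, receiver) in enumerate(your_signal.receivers):
        # resolve weak references to get at the actual receiver function
        func = receiver() if isinstance(receiver, weakref.ReferenceType) else receiver
        if func is None or func.__module__.startswith('django.'):
            continue  # dead reference, or one of django's own receivers
        mock_receiver = Mock()
        your_signal.receivers[i] = (lookup_key, mock_receiver)
        mocked_receivers.append(mock_receiver)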
There is a way to mock django signals with a small class.
You should keep in mind that this only mocks the function as a django signal handler, not the original function; for example, if an m2m_changed triggers a call to a function that calls your handler directly, mock.call_count will not be incremented. You would need a separate mock to keep track of those calls.
Here is the class in question:
import weakref
from mock import MagicMock
from django.db.models import signals

class LocalDjangoSignalsMock():
    def __init__(self, to_mock):
        """
        Replaces registered django signals with MagicMocks

        :param to_mock: list of signal handlers to mock
        """
        self.mocks = {handler: MagicMock() for handler in to_mock}
        self.reverse_mocks = {magicmock: mocked
                              for mocked, magicmock in self.mocks.items()}
        django_signals = [signals.post_save, signals.m2m_changed]
        self.registered_receivers = [signal.receivers
                                     for signal in django_signals]

    def _apply_mocks(self):
        for receivers in self.registered_receivers:
            for receiver_index in xrange(len(receivers)):
                handler = receivers[receiver_index]
                handler_function = handler[1]()
                if handler_function in self.mocks:
                    receivers[receiver_index] = (
                        handler[0], self.mocks[handler_function])

    def _reverse_mocks(self):
        for receivers in self.registered_receivers:
            for receiver_index in xrange(len(receivers)):
                handler = receivers[receiver_index]
                handler_function = handler[1]
                if not isinstance(handler_function, MagicMock):
                    continue
                receivers[receiver_index] = (
                    handler[0], weakref.ref(self.reverse_mocks[handler_function]))

    def __enter__(self):
        self._apply_mocks()
        return self.mocks

    def __exit__(self, *args):
        self._reverse_mocks()
Example usage
to_mock = [my_handler]
with LocalDjangoSignalsMock(to_mock) as mocks:
    my_trigger()
    for mocked in to_mock:
        assert(mocks[mocked].call_count)
        # 'function {0} was called {1}'.format(
        #     mocked, mocked.call_count)
As you mentioned, mock.patch('myapp.myfile._support_function') is correct but mock.patch('myapp.myfile.signal_handler_post_save_user') is wrong.

I think the reason is this: when your tests start up, some module imports the file where the signal handler lives, and at that moment the @receiver decorator connects the original signal_handler_post_save_user function to the signal. Patching the module attribute later doesn't change the reference the signal dispatcher already holds, so the original signal handler is called even though it is mocked. Mocking _support_function works because the (unmocked) handler looks _support_function up in the module at call time.
Try to disconnect the signal connection before mock.patch, like this:

post_save.disconnect(signal_handler_post_save_user)
with mock.patch("review.signals.signal_handler_post_save_user", autospec=True) as handler:
    pass  # do stuff
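If you go this route, remember to reconnect the real handler afterwards so later tests see the normal wiring; something like this (a sketch, reusing the dispatch_uid from the question):

    post_save.disconnect(sender=User,
                         dispatch_uid='myfile.signal_handler_post_save_user')
    try:
        with mock.patch("review.signals.signal_handler_post_save_user",
                        autospec=True) as handler:
            post_save.connect(handler, sender=User, dispatch_uid='test_handler')
            # do stuff that triggers post_save
    finally:
        post_save.connect(signal_handler_post_save_user, sender=User,
                          dispatch_uid='myfile.signal_handler_post_save_user')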
I have this decorator taken directly from an example I found on the net:
import signal

class TimedOutExc(Exception):
    pass

def timeout(timeout):
    def decorate(f):
        def handler(signum, frame):
            raise TimedOutExc()
        def new_f(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, handler)
            signal.alarm(timeout)
            try:
                result = f(*args, **kwargs)
            except TimedOutExc:
                return None
            finally:
                signal.signal(signal.SIGALRM, old)
                signal.alarm(0)
            return result
        new_f.func_name = f.func_name
        return new_f
    return decorate
It raises an exception if the function f times out.

Well, it works, but when I use this decorator on a multiprocessing function and it stops due to a timeout, it doesn't terminate the processes involved in the computation. How can I do that?

I don't want to launch an exception and stop the program. Basically, what I want is for f to return None when it times out, and then to terminate the processes involved.
While I agree with the main point of Aaron's answer, I would like to elaborate a bit.

The processes launched by multiprocessing must be stopped in the function to be decorated; I don't think this can be done generally and simply from the decorator itself (the decorated function is the only entity that knows what calculations it launched).

Instead of having the decorated function catch SIGALRM, you can also have it catch your custom TimedOutExc exception; this might be more flexible. Your example would then become:
import signal
import functools

class TimedOutExc(Exception):
    """
    Raised when a timeout happens
    """

def timeout(timeout):
    """
    Return a decorator that raises a TimedOutExc exception
    after timeout seconds, if the decorated function did not return.
    """
    def decorate(f):
        def handler(signum, frame):
            raise TimedOutExc()

        @functools.wraps(f)  # Preserves the documentation, name, etc.
        def new_f(*args, **kwargs):
            old_handler = signal.signal(signal.SIGALRM, handler)
            signal.alarm(timeout)
            result = f(*args, **kwargs)  # f() always returns, in this scheme
            signal.signal(signal.SIGALRM, old_handler)  # Old signal handler is restored
            signal.alarm(0)  # Alarm removed
            return result
        return new_f
    return decorate

@timeout(10)
def function_that_takes_a_long_time():
    try:
        pass  # ... long, parallel calculation ...
    except TimedOutExc:
        # ... Code that shuts down the processes ...
        # ...
        return None  # Or exception raised, which means that the calculation is not complete
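A filled-in version of that skeleton might look like this (a sketch; some_work and the input range are made up):

    import multiprocessing

    def some_work(x):
        return x * x  # stands in for the real per-item computation

    @timeout(10)
    def function_that_takes_a_long_time():
        pool = multiprocessing.Pool()
        try:
            results = pool.map(some_work, range(100))
        except TimedOutExc:
            pool.terminate()  # the function, not the decorator, kills its workers
            pool.join()
            return None
        pool.close()
        pool.join()
        return results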
I doubt that can be done with a decorator: A decorator is a wrapper for a function; the function is a black box. There is no communication between the decorator and the function it wraps.
What you need to do is rewrite your function's code to use the SIGALRM handler to terminate any processes it has started.
I need to register an atexit function for use with a class (see Foo below for an example) that, unfortunately, I have no direct way of cleaning up via a method call: other code, that I don't have control over, calls Foo.start() and Foo.end() but sometimes doesn't call Foo.end() if it encounters an error, so I need to clean up myself.
I could use some advice on closures in this context:
import atexit

class Foo:
    def cleanup(self):
        # do something here
        pass

    def start(self):
        def do_cleanup():
            self.cleanup()
        atexit.register(do_cleanup)

    def end(self):
        # cleanup is no longer necessary... how do we unregister?
        pass
Will the closure work properly, e.g. in do_cleanup, is the value of self bound correctly?
How can I unregister an atexit() routine?
Is there a better way to do this?
edit: this is Python 2.6.5
Make a global registry of cleanup functions, register a single atexit handler that calls everything in it, and remove entries from the registry when they're no longer needed:
import atexit

cleaners = set()

def _call_cleaners():
    for cleaner in list(cleaners):
        cleaner()

atexit.register(_call_cleaners)

class Foo(object):
    def cleanup(self):
        if self.cleaned:
            raise RuntimeError("ALREADY CLEANED")
        self.cleaned = True

    def start(self):
        self.cleaned = False
        cleaners.add(self.cleanup)

    def end(self):
        self.cleanup()
        cleaners.remove(self.cleanup)
I think the code is fine. There's no way to unregister, but you can set a boolean flag that would disable cleanup:
class Foo:
    def __init__(self):
        self.need_cleanup = True

    def cleanup(self):
        # do something here
        print 'clean up'

    def start(self):
        def do_cleanup():
            if self.need_cleanup:
                self.cleanup()
        atexit.register(do_cleanup)

    def end(self):
        # cleanup is no longer necessary... so disable it
        self.need_cleanup = False
Lastly, bear in mind that atexit handlers don't get called if "the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called."
self is bound correctly inside the do_cleanup callback, but in fact if all you are doing is calling the method, you might as well use the bound method directly.

You can use atexit.unregister() to remove the callback, but there is a catch: you must unregister the same function that you registered, and since you used a nested function, that means you have to store a reference to that function. If you follow my suggestion of using a bound method, then you still have to save a reference to it:
import atexit

class Foo:
    def cleanup(self):
        # do something here
        pass

    def start(self):
        self._cleanup = self.cleanup  # Need to save the bound method for unregister
        atexit.register(self._cleanup)

    def end(self):
        atexit.unregister(self._cleanup)
Note that it is still possible for your code to exit without calling the atexit-registered functions, for example if the process is aborted with Ctrl+Break on Windows or killed with SIGABRT on Linux.
Also, as another answer suggests, you could just use __del__, but that can be problematic for cleanup while a program is exiting, as it may not be called until after other globals it needs to access have been deleted.
Edited to note that when I wrote this answer the question didn't specify Python 2.x. Oh well, I'll leave the answer here anyway in case it helps anyone else.
Since shanked deleted his posting, I'll speak in favor of __del__ again:
import atexit, weakref

class Handler:
    def __init__(self, obj):
        self.obj = weakref.ref(obj)

    def cleanup(self):
        if self.obj is not None:
            obj = self.obj()
            if obj is not None:
                obj.cleanup()

class Foo:
    def __init__(self):
        self.start()

    def cleanup(self):
        print "cleanup"
        self.cleanup_handler = None

    def start(self):
        self.cleanup_handler = Handler(self)
        atexit.register(self.cleanup_handler.cleanup)

    def end(self):
        if self.cleanup_handler is None:
            return
        self.cleanup_handler.obj = None
        self.cleanup()

    def __del__(self):
        self.end()

a1 = Foo()
a1.end()
a1 = Foo()
a2 = Foo()
del a2
a3 = Foo()
a3.m = a3
This supports the following cases:

objects where .end is called regularly; cleanup right away
objects that are released without .end being called; cleanup when the last reference goes away
objects living in cycles; cleanup atexit
objects that are kept alive; cleanup atexit

Notice that it is important that the cleanup handler holds a weak reference to the object, as it would otherwise keep the object alive.
Edit: Cycles involving Foo will not be garbage-collected, since Foo implements __del__. To allow for the cycle being deleted at garbage collection time, the cleanup must be taken out of the cycle.
class Cleanup:
    cleaned = False

    def cleanup(self):
        if self.cleaned:
            return
        print "cleanup"
        self.cleaned = True

    def __del__(self):
        self.cleanup()

class Foo:
    def __init__(self): ...

    def start(self):
        self.cleaner = Cleanup()
        atexit.register(Handler(self).cleanup)

    def cleanup(self):
        self.cleaner.cleanup()

    def end(self):
        self.cleanup()

It's important that the Cleanup object has no references back to Foo.
Why don't you try it? It only took me a minute to check.
(Answer: Yes)
However, you can simplify it. The closure isn't needed.
import atexit

class Foo:
    def cleanup(self):
        pass

    def start(self):
        atexit.register(self.cleanup)
And to avoid cleaning up twice, just check in the cleanup method whether cleanup is still needed before you do it.
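For example (a sketch of that guard):

    import atexit

    class Foo:
        def cleanup(self):
            if getattr(self, 'cleaned_up', False):
                return  # already cleaned up explicitly; the atexit call is a no-op
            self.cleaned_up = True
            # do the real cleanup here

        def start(self):
            atexit.register(self.cleanup)

        def end(self):
            self.cleanup()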