Does an application-wide exception handler make sense? (Python)

Long story short: I have a substantial Python application that, among other things, shells out to "losetup", "mount", etc. on Linux, consuming system resources that must be released when the work is complete.
If my application crashes, I want to ensure these system resources are properly released.
Does it make sense to do something like the following?
def main():
    # TODO: main application entry point
    pass

def cleanup():
    # TODO: release system resources here
    pass

if __name__ == "__main__":
    try:
        main()
    except:
        cleanup()
        raise
Is this something that is typically done? Is there a better way? Perhaps the destructor in a singleton class?

I like top-level exception handlers in general (regardless of language). They're a great place to clean up resources that may not be immediately related to the resources consumed inside the method that throws the exception.
It's also a fantastic place to log those exceptions, if you have such a framework in place. Top-level handlers will catch the bizarre exceptions you didn't plan on and let you correct them in the future; otherwise you may never know about them at all.
Just be careful that your top-level handler doesn't throw exceptions!
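As a minimal sketch of that combination, using the standard logging module (the log file name is illustrative; main() and cleanup() are the question's):
import logging

logging.basicConfig(filename="app.log", level=logging.INFO)

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # Log the full traceback before cleaning up and re-raising.
        logging.exception("Unhandled exception reached the top-level handler")
        cleanup()
        raise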

A destructor (as in a __del__ method) is a bad idea, as these are not guaranteed to be called. The atexit module is a safer approach, although these will still not fire if the Python interpreter crashes (rather than the Python application), or if os._exit() is used, or the process is killed aggressively, or the machine reboots. (Of course, the last item isn't an issue in your case.) If your process is crash-prone (it uses fickle third-party extension modules, for instance) you may want to do the cleanup in a simple parent process for more isolation.
If you aren't really worried, use the atexit module.
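For example, a minimal atexit sketch (the release_resources() body and the /dev/loop0 device are illustrative, not from the question):
import atexit
import subprocess

def release_resources():
    # Hypothetical cleanup: detach a loop device that was set up earlier.
    subprocess.call(["losetup", "-d", "/dev/loop0"])

atexit.register(release_resources)
Registered handlers run on normal interpreter exit and after an unhandled exception, but, as noted above, not on os._exit(), a hard crash, or a SIGKILL.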

An application-wide handler is fine. It's great for logging. Just make sure the handler itself is robust and unlikely to crash.

If you use classes, you should of course free the resources they allocate in their destructors instead. Use the try: around the entire application only if you want to free resources that aren't already released by your classes' destructors.
And instead of using a catch-all except:, you should use the following block:
try:
    main()
finally:
    cleanup()
That will ensure cleanup in a more pythonic way.

That seems like a reasonable approach, and more straightforward and reliable than a destructor on a singleton class. You might also look at the "atexit" module. (Pronounced "at exit", not "a tex it" or something like that. I confused that for a long while.)

Consider writing a context manager and using the with statement.
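For instance, a sketch using contextlib.contextmanager (acquire_resources() and release_resources() are hypothetical stand-ins for your losetup/mount setup and teardown):
from contextlib import contextmanager

@contextmanager
def system_resources():
    handle = acquire_resources()      # hypothetical setup: losetup, mount, ...
    try:
        yield handle
    finally:
        release_resources(handle)     # always runs, even if the body raises

def main():
    with system_resources() as handle:
        do_work(handle)               # hypothetical application body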

Is lock release via an exception inside a with block sound, intentional design?

I have a small server script mirroring IoT traffic and handling several kinds of packets.
In it, I keep data in queues, and pulling from them is arranged as follows:
def pull_from(service, ID):
    with service.LOCK_A:
        if ID not in service.queues:
            service.queues[ID] = queue.Queue(35)
        return service.queues[ID].get(timeout=2.5)
Here the timeout expires after, for example, 2.5 seconds and raises queue.Empty, releasing the lock. The exception is caught downstream.
I have previously avoided constructs like this. Is this considered "sound design", or is releasing a lock through an exception inside a with block a hack that should be avoided?
Yes, it’s fine. with statements always run __exit__ on exceptions, by design, like a try…finally.
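Roughly, the with block above is a sketch-level equivalent of this try/finally:
service.LOCK_A.acquire()
try:
    if ID not in service.queues:
        service.queues[ID] = queue.Queue(35)
    return service.queues[ID].get(timeout=2.5)
finally:
    # Runs whether get() returns a value or raises queue.Empty.
    service.LOCK_A.release()
So releasing the lock on the exception path is exactly what with is designed to guarantee.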

RAII in python - How to manage the lifetime of chain of resources

I am a bit of a Python newbie, but I am implementing a benchmarking tool in Python that will, for example, create several sets of resources that depend on each other. When the program finishes (or the resources go out of scope), I want to clean up the resources in the correct order.
I'm from a C++ background, in C++ I know I can do this with RAII (constructors, destructors).
What is an equivalent pattern in Python for this problem? Is there a way to do RAII in Python, or is there a better way to solve it?
You are probably looking for a context manager, which is an object that can be used in a with statement:
with context() as c:
    do_something(c)
When the with statement is entered, the expression (in this case, context()) is evaluated and should return a context manager. __enter__() is called on the context manager, and the result (which may or may not be the same object as the context manager) is assigned to the variable specified with as. No matter how control exits the with body, __exit__() is called on the context manager, with arguments that specify whether an exception was thrown or not.
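A minimal class-based sketch of that protocol (all names here are illustrative):
class ManagedResource:
    def __enter__(self):
        self.handle = acquire()              # hypothetical setup
        return self.handle                   # bound to the name after "as"

    def __exit__(self, exc_type, exc_value, traceback):
        release(self.handle)                 # hypothetical teardown; always runs
        return False                         # do not suppress exceptions

with ManagedResource() as handle:
    use(handle)                              # hypothetical body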
As an example: the builtin open() should be used in this way in order to close the opened file after interacting with it.
A new context manager type can easily be defined with contextlib.
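For a chain of dependent resources specifically, contextlib.ExitStack (Python 3.3+) tears them down in reverse order of acquisition; a sketch with hypothetical resource classes:
from contextlib import ExitStack

with ExitStack() as stack:
    a = stack.enter_context(ResourceA())     # hypothetical context managers
    b = stack.enter_context(ResourceB(a))    # depends on a
    c = stack.enter_context(ResourceC(b))    # depends on b
    run_benchmark(a, b, c)                   # hypothetical
# On exit (normal or via exception), __exit__ runs in reverse: c, then b, then a.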
For a more one-off solution, you can use try/finally: the finally block is executed after the try block, no matter how control exits the try block:
try:
    do_something()
finally:
    cleanup()

What is the proper way to handle (in python) IOError: [Errno 4] Interrupted system call, raised by multiprocessing.Queue.get

When I use multiprocessing.Queue.get I sometimes get an exception due to EINTR.
I know for certain that sometimes this happens for no good reason (for example when I open another pane in tmux), and in such a case I want to continue working and retry the operation.
I can imagine that in other cases the error would occur for a good reason and I should stop running or fix something.
How can I distinguish the two?
Thanks in advance
The EINTR error can be returned from many system calls when the application receives a signal while waiting for other input. Typically these signals can be quite benign and already handled by Python, but the underlying system call still ends up being interrupted. When doing C/C++ coding this is one reason why you can't entirely rely on functions like sleep(). The Python libraries sometimes handle this error code internally, but obviously in this case they're not.
You might be interested to read this thread which discusses this problem.
The general approach to EINTR is to simply handle the error and retry the operation again - this should be a safe thing to do with the get() method on the queue. Something like this could be used, passing the queue as a parameter and replacing the use of the get() method on the queue:
import errno

def my_queue_get(queue, block=True, timeout=None):
    while True:
        try:
            return queue.get(block, timeout)
        except IOError as e:
            if e.errno != errno.EINTR:
                raise

# Now replace instances of queue.get() with my_queue_get(queue), with other
# parameters passed as usual.
Typically you shouldn't need to worry about EINTR in a Python program unless you know you're waiting for a particular signal (for example SIGHUP) and you've installed a signal handler which sets a flag and relies on the main body of the code to pick up the flag. In this case, you might need to break out of your loop and check the signal flag if you receive EINTR.
However, if you're not using any signal handling then you should be able to just ignore EINTR and repeat your operation - if Python itself needs to do something with the signal it should have already dealt with it in the signal handler.
Old question, modern solution: as of Python 3.5, the wonderful PEP 475 - Retry system calls failing with EINTR has been implemented and solves the problem for you. Here is the abstract:
System call wrappers provided in the standard library should be retried automatically when they fail with EINTR, to relieve application code from the burden of doing so.
By system calls, we mean the functions exposed by the standard C library pertaining to I/O or handling of other system resources.
Basically, the standard library will catch and retry for you a call that failed with EINTR, so you don't have to handle it anymore. If you are targeting an older release, the while True loop is still the way to go. Note, however, that if you are using Python 3.3 or 3.4, you can catch the dedicated InterruptedError exception instead of catching IOError and checking for EINTR.
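On 3.3/3.4 the retry loop above can therefore be written as a sketch like this:
def my_queue_get(queue, block=True, timeout=None):
    while True:
        try:
            return queue.get(block, timeout)
        except InterruptedError:
            # EINTR: the underlying system call was interrupted; just retry.
            continue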

Python Twisted Deferred : clarification needed

I am hoping for some clarification on the best way to deal with creating "first" Deferreds, i.e. not just adding callbacks and errbacks to existing Twisted methods that return a Deferred, but the best way of creating those original Deferreds.
As a concrete example, here are two variations of the same method: it just counts the number of lines in some rather big text files and is used as the starting point for a chain of Deferreds.
Method 1:
This one does not feel so good, as the deferred is fired directly by the reactor.callLater method.
def get_line_count(self):
    deferred = defer.Deferred()

    def count_lines(result):
        try:
            print_file = open(self.print_file_path, "r")
            self.line_count = sum(1 for line in print_file)
            print_file.close()
            return self.line_count
        except Exception as inst:
            raise InvalidFile()

    deferred.addCallback(count_lines)
    reactor.callLater(1, deferred.callback, None)
    return deferred
Method 2:
Slightly better, as the deferred is actually fired when the result is available.
def get_line_count(self):
    deferred = defer.Deferred()

    def count_lines():
        try:
            print_file = open(self.print_file_path, "r")
            self.line_count = sum(1 for line in print_file)
            print_file.close()
            deferred.callback(self.line_count)
        except Exception as inst:
            deferred.errback(InvalidFile())

    reactor.callLater(1, count_lines)
    return deferred
Note: you could also point out that both of these are actually synchronous, potentially blocking methods (and perhaps I could use "maybeDeferred"?).
But that is actually one of the aspects I get confused by.
For Method 2, if the count_lines method is very slow (counting the lines in some huge files, etc.), will it potentially "block" the whole Twisted app?
I read quite a lot of documentation on how callbacks, errbacks and the reactor behave together (callbacks need to be executed quickly, or return deferreds themselves, etc.), but in this case I just don't see it and would really appreciate some pointers/examples.
Are there some articles/clear explanations that deal with the best approach to creating these "first" Deferreds? I have read through these excellent articles, and they have helped a lot with some of the basic understanding, but I still feel like I am missing a piece.
For blocking code, would this be a typical case for deferToThread or reactor.spawnProcess?
I read through a lot of questions like this one and this article, but I am still not 100% sure how to deal with potentially blocking code, mostly when dealing with file I/O.
Sorry if any of this seems too basic, but I really want to get the hang of using Twisted more thoroughly. (It has been a really powerful tool for all the more network-oriented aspects.)
Thank you for your time!
Yes, you've got it right: you need threads or separate processes to avoid blocking the Twisted event loop. Using Deferreds won't magically make your code non-blocking. For your questions:
Yes, you would block the event loop if count_lines is very slow. Deferring it to a thread would solve this.
I used Twisted's documentation to learn how Deferreds work, but I guess you've already been through that. The article on database support was informative, since it clearly says that that library is built using threads. This is how you bridge the synchronous–asynchronous gap.
If the call is truly blocking, then you need to DeferToThread. Python itself is kind-of single threaded, meaning that only one thread can execute Python byte code at a time. However, if the thread you create will block on I/O anyway, then this model works fine: the thread will release the global interpreter lock and so let other Python threads run, including the main thread with the Twisted event loop.
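For example, a sketch of the line-counting method using twisted.internet.threads.deferToThread (adapting Method 2 above; the helper simply returns or raises, and the resulting Deferred fires or errbacks accordingly):
from twisted.internet import threads

def get_line_count(self):
    def count_lines():
        # Runs in a worker thread, so blocking file I/O is acceptable here.
        with open(self.print_file_path, "r") as print_file:
            self.line_count = sum(1 for line in print_file)
        return self.line_count

    # Returns a Deferred that fires with count_lines()'s return value,
    # or errbacks with the exception if it raises.
    return threads.deferToThread(count_lines)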
It can also be the case that you can use non-blocking I/O in your code. This can be done with the select module, for example. In that case, you don't need a separate thread. Twisted uses this technique internally and you don't have to think of this if you do normal network I/O. But if you're doing something exotic, then it's good to know how things are built so that you can do the same.
I hope that makes things a bit clearer!

Python: time a method call and stop it if time is exceeded

I need to dynamically load code (comes as source), run it and get the results. The code that I load always includes a run method, which returns the needed results. Everything looks ridiculously easy, as usual in Python, since I can do
exec(source) #source includes run() definition
result = run(params)
#do stuff with result
The only problem is, the run() method in the dynamically generated code can potentially fail to terminate, so I need to run it for at most x seconds. I could spawn a new thread for this and specify a timeout for the .join() method, but then I cannot easily get the result out of it (or can I?). Performance is also an issue to consider, since all of this is happening in a long while loop.
Any suggestions on how to proceed?
Edit: to clear things up per dcrosta's request: the loaded code is not untrusted, but generated automatically on the machine. The purpose for this is genetic programming.
The only "really good" solutions -- imposing essentially no overhead -- are going to be based on SIGALRM, either directly or through a nice abstraction layer; but as already remarked Windows does not support this. Threads are no use, not because it's hard to get results out (that would be trivial, with a Queue!), but because forcibly terminating a runaway thread in a nice cross-platform way is unfeasible.
This leaves high-overhead multiprocessing as the only viable cross-platform solution. You'll want a process pool to reduce process-spawning overhead (since presumably the need to kill a runaway function is only occasional, most of the time you'll be able to reuse an existing process by sending it new functions to execute). Again, Queue (the multiprocessing kind) makes getting results back easy (albeit with a modicum more caution than for the threading case, since in the multiprocessing case deadlocks are possible).
If you don't need to strictly serialize the executions of your functions, but rather can arrange your architecture to try two or more of them in parallel, AND are running on a multi-core machine (or multiple machines on a fast LAN), then suddenly multiprocessing becomes a high-performance solution, easily paying back for the spawning and IPC overhead and more, exactly because you can exploit as many processors (or nodes in a cluster) as you can use.
You could use the multiprocessing library to run the code in a separate process, and call .join() on the process to wait for it to finish, with the timeout parameter set to whatever you want. The library provides several ways of getting data back from another process - using a Value object (seen in the Shared Memory example on that page) is probably sufficient. You can use the terminate() call on the process if you really need to, though it's not recommended.
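A sketch of that approach, using a multiprocessing.Queue rather than a Value to carry the result back (run() and params are the question's; the 5-second timeout and helper names are illustrative, and with the spawn start method the dynamically exec'd run() would additionally need to be importable by the child process):
import multiprocessing

def _worker(result_queue, params):
    # run() is the dynamically generated function from the question.
    try:
        result_queue.put(("ok", run(params)))
    except Exception as exc:
        result_queue.put(("error", exc))

def run_with_timeout(params, timeout=5.0):
    result_queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_worker, args=(result_queue, params))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()          # last resort, as noted above
        proc.join()
        return None               # run() overran the time limit
    status, value = result_queue.get()
    if status == "error":
        raise value
    return value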
You could also use Stackless Python, as it allows for cooperative scheduling of microthreads. Here you can specify a maximum number of instructions to execute before returning. Setting up the routines and getting the return value out is a little more tricky though.
I could spawn a new thread for this, and specify a time for .join() method, but then I cannot easily get the result out of it
If the timeout expires, that means the method didn't finish, so there's no result to get. If you have incremental results, you can store them somewhere and read them out however you like (keeping threadsafety in mind).
Using SIGALRM-based systems is dicey, because it can deliver async signals at any time, even during an except or finally handler where you're not expecting one. (Other languages deal with this better, unfortunately.) For example:
try:
    # code
finally:
    cleanup1()
    cleanup2()
    cleanup3()
A signal passed up via SIGALRM might happen during cleanup2(), which would cause cleanup3() to never be executed. Python simply does not have a way to terminate a running thread in a way that's both uncooperative and safe.
You should just have the code check the timeout on its own.
import threading
from datetime import datetime, timedelta

local = threading.local()

class ExecutionTimeout(Exception):
    pass

def start(max_duration=timedelta(seconds=1)):
    local.start_time = datetime.now()
    local.max_duration = max_duration

def check():
    if datetime.now() - local.start_time > local.max_duration:
        raise ExecutionTimeout()

def do_work():
    start()
    while True:
        check()
        # do stuff here
    return 10

try:
    print(do_work())
except ExecutionTimeout:
    print("Timed out")
(Of course, this belongs in a module, so the code would actually look like "timeout.start()"; "timeout.check()".)
If you're generating code dynamically, then generate a timeout.check() call at the start of each loop.
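For instance, the generated source might come out looking like this sketch (with the helpers above living in a hypothetical timeout module, and evaluate() standing in for the generated work):
import timeout

def run(params):
    timeout.start()
    total = 0
    for individual in params:          # illustrative generated loop
        timeout.check()                # injected at the top of each loop body
        total += evaluate(individual)  # hypothetical
    return total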
Consider the stopit package, which can be useful in cases where you need timeout control. Its documentation emphasizes the limitations:
https://pypi.python.org/pypi/stopit
A quick Google search for "python timeout" reveals a TimeoutFunction class.
Executing untrusted code is dangerous and should be avoided unless there is no alternative. I think you're right to be worried about the running time of the run() method, but the run() method could do other things as well: delete all your files, open sockets and make network connections, begin cracking your passwords and email the results back to an attacker, etc.
Perhaps if you can give some more detail on what the dynamically loaded code does, the SO community can help suggest alternatives.
