I've been doing amateur coding in Python for a while now and feel quite comfortable with it. Recently though I've been writing my first Daemon and am trying to come to terms with how my programs should flow.
With my past programs, exceptions could be handled by simply aborting the program, perhaps after some minor cleaning up. The only consideration I had to give to program structure was the effective handling of non-exception input. In effect, "Garbage In, Nothing Out".
In my Daemon, there is an outside loop that effectively never ends and a sleep statement within it to control the interval at which things happen. Processing of valid input data is easy but I'm struggling to understand the best practice for dealing with exceptions. Sometimes the exception may occur within several levels of nested functions and each needs to return something to its parent, which must, in turn, return something to its parent until control returns to the outer-most loop. Each function must be capable of handling any exception condition, not only for itself but also for all its subordinates.
I apologise for the vagueness of my question but I'm wondering if anyone could offer me some general pointers into how these exceptions should be handled. Should I be looking at spawning sub-processes that can be terminated without impact to the parent? A (remote) possibility is that I'm doing things correctly and actually do need all that nested handling. Another very real possibility is that I haven't got a clue what I'm talking about. :)
Steve
Exceptions are designed for the purpose of (potentially) not being caught immediately; that's how they differ from a function returning a value that means "error". Each exception can be caught at the level where you want to (and can) do something about it.
At a minimum, you could start by catching all exceptions at the main loop and logging a message. This is simple and ensures that your daemon won't die. At the main loop it's probably too late to fix most problems, so you can catch specific exceptions sooner. E.g. if a file has the wrong format, catch the exception in the routine that opens and tries to use the file, not deep in the parsing code where the problem is discovered; perhaps you can try another format. Basically if there's a place where you could recover from a particular error condition, catch it there and do so.
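A minimal sketch of that top-level catch-all (the `work` callable and the interval are hypothetical placeholders for whatever your daemon actually does):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("daemon")

def safe_cycle(work):
    """Run one work cycle; log any exception instead of letting it kill the loop."""
    try:
        work()
        return True
    except Exception:
        # logging.exception records the full traceback for later debugging.
        log.exception("unhandled error in work cycle")
        return False

def main_loop(work, interval=60):
    while True:
        safe_cycle(work)
        time.sleep(interval)
```

With this shape, the nested functions don't need to return error codes upward at all; they just let exceptions propagate, and the loop survives.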
The answer will be "it depends".
If an exception occurs in some low-level function, it may be appropriate to catch it there if there is enough information available at that level to let the function complete successfully in spite of the exception. E.g. when reading triangles from an .stl file, the normal vector of a triangle is both explicitly given and implicitly defined by the order of the three points that make up the triangle. So if the normal vector is given as (0,0,0), a zero-length vector that should trigger an exception in the constructor of a Normal vector class, that exception can be safely caught in the constructor of a Triangle class, because the normal can still be calculated by other means.
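A minimal sketch of that recovery (the class and helper names here are made up for illustration, not from any real .stl library):

```python
class ZeroLengthError(ValueError):
    pass

class Normal:
    def __init__(self, x, y, z):
        if x == y == z == 0:
            raise ZeroLengthError("zero-length normal vector")
        self.x, self.y, self.z = x, y, z

def _sub(a, b):
    return tuple(p - q for p, q in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

class Triangle:
    def __init__(self, p1, p2, p3, normal_xyz):
        self.points = (p1, p2, p3)
        try:
            self.normal = Normal(*normal_xyz)
        except ZeroLengthError:
            # Recover: the winding order of the points implies the normal,
            # so compute it from the cross product of two edges instead.
            self.normal = Normal(*_cross(_sub(p2, p1), _sub(p3, p1)))
```

The exception never escapes `Triangle.__init__` because that is the level where enough information exists to fix the problem.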
If there is not enough information available to handle an exception, it should trickle upwards to a level where it can be handled. E.g. if you are writing a module to read and interpret a file format, it should raise an exception if the file it was given doesn't match the file format. In this case it is probably the top level of the program using that module that should handle the exception and communicate with the user. (Or in case of a daemon, log the error and carry on.)
Related
I want to write a Python script which ensures in any case that a database connection will be closed. (Please note that I'm not sure if I used the correct terms for everything described below.)
I can think of the following situations in which the script might end:
The script runs without any problem to its end.
The script is stopped by a raised exception.
The script is stopped by receiving a SIGTERM.
The script is stopped by receiving a SIGKILL.
What would be the best method to ensure that the database connection is closed in every case? It would be nice if you could point out the strengths and limits of the with and finally statements.
As this question is of more theoretical interest, no minimal code example is given. Please also note that it doesn't have to be a database connection; I'm generally interested in the possibilities.
Thank you in advance.
Best,
Christian
One possibility is the atexit module. But it is cleaner to use try:/finally:, or even better make a context manager so that your connection object can be used in a with: statement.
By the way, another way an exit can happen is that the sys.exit() function is called. Internally, even sys.exit() works by raising an exception of type SystemExit, so with: statements and finally: handlers will still be called.
As the atexit documentation points out, none of these will be called if the program is exited with os._exit().
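To make the four cases concrete, here is a small context-manager sketch (using sqlite3 only as a stand-in; any connection-like object works the same way). It covers normal exit, raised exceptions, and sys.exit(); SIGTERM can be converted into an ordinary SystemExit with a signal handler, while SIGKILL can never be intercepted by any mechanism:

```python
import signal
import sqlite3
import sys
from contextlib import contextmanager

@contextmanager
def db_connection(path):
    conn = sqlite3.connect(path)
    try:
        yield conn
    finally:
        # Runs on normal exit, on any exception, and on sys.exit(),
        # because sys.exit() just raises SystemExit.
        conn.close()

# Optional: turn SIGTERM into SystemExit so finally: blocks still run.
# (SIGKILL cannot be caught; no cleanup is possible in that case.)
signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))

with db_connection(":memory:") as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
```

After the with block, the connection is closed whether or not the body raised.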
I have a python CGI script that takes several query strings as arguments.
The query strings are generated by another script, so there is little possibility to get illegal arguments,
unless some "naughty" user changes them intentionally.
An illegal argument may throw an exception (e.g. the int() function receiving non-numerical input),
but does it make sense to write code to catch such rare errors? Is there any security risk or performance penalty if they are not caught?
I know the page may go ugly if exceptions are not nicely handled, but a naughty user deserves it, right?
Any unhandled exception causes the program to terminate.
That means if your program is doing something when the exception occurs, it will shut down in an unclean fashion without releasing resources.
In any case, CGI is obsolete; use Django, Flask, web2py, or something similar.
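If you do want to be defensive, a small validation helper is cheap (the function name here is illustrative, not a library API):

```python
def parse_int_param(raw, default=None):
    """Convert a query-string value to int, returning a fallback on bad input."""
    try:
        return int(raw)
    except (TypeError, ValueError):
        # Bad or missing input: let the caller return a clean error page
        # instead of showing the user a raw traceback.
        return default
```

This way the "naughty" user gets a deliberate error response rather than whatever your server does with an uncaught exception, which may leak implementation details.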
In C++, in order for code to be robust in the presence of exceptions, it is often necessary to rely on the fact that a few simple operations are guaranteed never to fail (and hence never to throw an exception). Examples of these operations include assignment of integers and swapping of standard containers.
Are there any operations in Python which provide this no-fail guarantee?
Python is a higher-level language than C and C++. Anything can involve code execution behind the scenes, and no name is exempt from looking up its current, possibly overridden value. It might be possible to identify some operations that are guaranteed never to raise an exception, but I suspect that set of operations would be so small that it provides no benefit over the usual assumption that anything can raise an exception at any time.
And the identification of those operations would require limiting your Python environment. For example, you can assign a trace function which is invoked for every line of your Python program. With a suitably crafted trace function, even 1+1 could raise an exception. So do we assume that there is no trace function? What about redefining builtins?
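To illustrate the trace-function point, here is a contrived sketch (not something you would do in real code) showing that even `1 + 1` can raise once a trace function is installed:

```python
import sys

def exploding_tracer(frame, event, arg):
    if event == "line":
        # If a trace function raises, the exception propagates into the
        # code being traced, and tracing is disabled.
        raise RuntimeError("injected by trace function")
    return exploding_tracer  # use this as the local (per-frame) tracer too

def add_one_and_one():
    return 1 + 1

def demo():
    sys.settrace(exploding_tracer)
    try:
        return add_one_and_one()  # even this can now raise
    except RuntimeError:
        return None
    finally:
        sys.settrace(None)  # always remove the tracer
```

Calling `demo()` returns None, because the trivial addition was interrupted by the injected exception; once the tracer is removed, `add_one_and_one()` behaves normally again.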
Practically speaking, you need to adopt a different mindset for Python: exceptions happen, and you can't know ahead of time what they might be. As Mark Amery says in the comments, C++ needs to avoid memory leaks and uninitialized variables, which are not issues in Python.
When I use multiprocessing.Queue.get I sometimes get an exception due to EINTR.
I know for certain that sometimes this happens for no good reason (e.g. when I open another pane in a tmux buffer), and in such a case I would want to continue working and retry the operation.
I can imagine that in some other cases the error would be due to a good reason, and I should stop running or fix some error.
How can I distinguish the two?
Thanks in advance
The EINTR error can be returned from many system calls when the application receives a signal while waiting for other input. Typically these signals can be quite benign and already handled by Python, but the underlying system call still ends up being interrupted. When doing C/C++ coding this is one reason why you can't entirely rely on functions like sleep(). The Python libraries sometimes handle this error code internally, but obviously in this case they're not.
You might be interested to read this thread which discusses this problem.
The general approach to EINTR is to simply handle the error and retry the operation again - this should be a safe thing to do with the get() method on the queue. Something like this could be used, passing the queue as a parameter and replacing the use of the get() method on the queue:
import errno
def my_queue_get(queue, block=True, timeout=None):
    while True:
        try:
            return queue.get(block, timeout)
        except IOError as e:
            if e.errno != errno.EINTR:
                raise

# Now replace instances of queue.get() with my_queue_get(queue), with other
# parameters passed as usual.
Typically you shouldn't need to worry about EINTR in a Python program unless you know you're waiting for a particular signal (for example SIGHUP) and you've installed a signal handler which sets a flag and relies on the main body of the code to pick up the flag. In this case, you might need to break out of your loop and check the signal flag if you receive EINTR.
However, if you're not using any signal handling then you should be able to just ignore EINTR and repeat your operation - if Python itself needs to do something with the signal it should have already dealt with it in the signal handler.
Old question, modern solution: as of Python 3.5, the wonderful PEP 475 - Retry system calls failing with EINTR has been implemented and solves the problem for you. Here is the abstract:
System call wrappers provided in the standard library should be retried automatically when they fail with EINTR, to relieve application code from the burden of doing so.
By system calls, we mean the functions exposed by the standard C library pertaining to I/O or handling of other system resources.
Basically, the system will catch and retry for you a piece of code that failed with EINTR so you don't have to handle it anymore. If you are targeting an older release, the while True loop still is the way to go. Note however that if you are using Python 3.3 or 3.4, you can catch the dedicated exception InterruptedError instead of catching IOError and checking for EINTR.
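On 3.3/3.4 the retry loop then becomes the following (`FlakyQueue` is a hypothetical stand-in used only to demonstrate the retry; in real code you would pass a multiprocessing.Queue):

```python
def my_queue_get(queue, block=True, timeout=None):
    while True:
        try:
            return queue.get(block, timeout)
        except InterruptedError:
            pass  # EINTR: just retry (Python 3.3/3.4; automatic from 3.5 on)

class FlakyQueue:
    """Hypothetical queue whose get() fails once with EINTR, then succeeds."""
    def __init__(self, value):
        self.value = value
        self.calls = 0

    def get(self, block=True, timeout=None):
        self.calls += 1
        if self.calls == 1:
            raise InterruptedError()
        return self.value
```

The wrapper transparently absorbs the interrupted call and returns the value on the second attempt.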
I am writing a deferred task which is intended to construct a file in the blobstore for download. I am modelling the code on the example given in the docs:
http://code.google.com/appengine/articles/deferred.html
The idea is to structure the code so that if there is a DeadlineExceededError the handler can tidy up and kick off a new deferred task to continue later.
What I'd like to know is when exactly can this exception be thrown? Are there any operations which are guaranteed to be atomic and therefore will not be interrupted?
In the example (referenced above) they update a variable called start_key as they finish processing each record, but if the main loop were interrupted between the extending of the to_put and to_delete lists, then the data would be wrong, as it would miss a set of deletes.
If an exception can be raised at any point then it could be halfway through the batch_write, or between the db.put and clearing of the to_put list.
This is logically equivalent to a thread-safety problem; to solve it, one normally relies on a distinction between guaranteed-atomic operations and non-atomic ones.
How does this work?
Thanks
A DeadlineExceededError can be thrown literally any time at all. If there were a time when it couldn't be thrown, an abusive app could simply execute that code in a loop.
You can avoid this several ways:
Proactively check how long you've been executing for and stop at a good time before you hit the deadline.
Put the exception handler somewhere that it can store the state as of the last set of completed operations (e.g., discarding anything done since the last completed iteration of the outer loop in which the exception was thrown).
Use backends, which do not have deadlines.
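The first option can be sketched generically (the margin value and the time.monotonic budget-keeping are assumptions for illustration, not App Engine API calls):

```python
import time

DEADLINE_MARGIN = 5.0  # seconds reserved for cleanup and re-queueing (assumed)

def process_batch(records, started, budget):
    """Process records until the time budget is nearly spent.

    Returns the records actually processed; the caller can kick off a new
    deferred task that resumes from len(done).
    """
    done = []
    for record in records:
        if time.monotonic() - started > budget - DEADLINE_MARGIN:
            break  # stop at a clean record boundary before the deadline hits
        done.append(record)  # placeholder for the real per-record work
    return done
```

Because you stop voluntarily at a record boundary, the put/delete lists are never left half-updated, which sidesteps the atomicity question entirely.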