Python function local variable scope during exceptions

Background: I'm doing COM programming of National Instruments' TestStand in Python. TestStand complains if objects aren't "released" properly (it pops up an "objects not released properly" debug dialog box). The way to release the TestStand COM objects in Python is to ensure no variables still refer to the object—e.g. del them, or set them to None. Or, as long as the variables are function locals, the object is released as soon as the variable goes out of scope when the function ends.
Well, I've followed this rule in my program, and my program releases objects properly as long as there are no exceptions. But if I get an exception, then I get the "objects not released" message from TestStand. This seems to indicate that function local variables aren't going out of scope normally when an exception happens.
Here is a simplified code example:
class TestObject(object):
    def __init__(self, name):
        self.name = name
        print("Init " + self.name)

    def __del__(self):
        print("Del " + self.name)

def test_func(parameter):
    local_variable = parameter
    try:
        pass
        # raise Exception("Test exception")
    finally:
        pass
        # local_variable = None
        # parameter = None

outer_object = TestObject('outer_object')
try:
    inner_object = TestObject('inner_object')
    try:
        test_func(inner_object)
    finally:
        inner_object = None
finally:
    outer_object = None
When this runs as shown, it shows what I expect:
Init outer_object
Init inner_object
Del inner_object
Del outer_object
But if I uncomment the raise Exception... line, instead I get:
Init outer_object
Init inner_object
Del outer_object
Traceback (most recent call last):
...
Exception: Test exception
Del inner_object
The inner_object is deleted late due to the exception.
If I uncomment the lines that set both parameter and local_variable to None, then I get what I expect:
Init outer_object
Init inner_object
Del inner_object
Del outer_object
Traceback (most recent call last):
...
Exception: Test exception
So when exceptions happen in Python, what exactly happens to function local variables? Are they being saved somewhere so they don't go out of scope as normal? What is "the right way" to control this behaviour?

Your exception-handling is probably creating reference loops by keeping references to frames. As the docs put it:
Note: Keeping references to frame objects, as found in the first element of the frame records these functions return [NB: "these functions" here refers to some in module inspect, but the rest of the paragraph applies more widely!], can cause your program to create reference cycles. Once a reference cycle has been created, the lifespan of all objects which can be accessed from the objects which form the cycle can become much longer even if Python’s optional cycle detector is enabled. If such cycles must be created, it is important to ensure they are explicitly broken to avoid the delayed destruction of objects and increased memory consumption which occurs. Though the cycle detector will catch these, destruction of the frames (and local variables) can be made deterministic by removing the cycle in a finally clause. This is also important if the cycle detector was disabled when Python was compiled or using gc.disable(). For example:
def handle_stackframe_without_leak():
    frame = inspect.currentframe()
    try:
        ...  # do something with the frame
    finally:
        del frame

A function's local variables stay in scope for the entire function; they aren't dropped early just because an exception is propagating out of it. Clear them in a finally clause.
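Applied to the question's test_func, that might look like this sketch (do_work is a hypothetical stand-in for the real work):

def test_func(parameter):
    local_variable = parameter
    try:
        local_variable.do_work()  # hypothetical call that may raise
    finally:
        # Break the frame's references so the COM object can be released
        # even while the exception (and its traceback) is still alive.
        local_variable = None
        parameter = None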

According to this answer for another question, it is possible to inspect local variables on the frame in an exception traceback via tb_frame.f_locals. So it does look as though the objects are kept "alive" for the duration of the exception handling.
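A small sketch of that inspection (the function and variable names are illustrative):

import sys

def f():
    secret = "still alive"
    raise ValueError("boom")

try:
    f()
except ValueError:
    tb = sys.exc_info()[2]
    while tb.tb_next:  # walk to the innermost frame
        tb = tb.tb_next
    print(tb.tb_frame.f_locals)  # {'secret': 'still alive'}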

Related

When does exception handling unexpectedly influence object lifetimes?

The Python reference on the data model notes that
catching an exception with a ‘try…except’ statement may keep objects alive.
It seems rather obvious that exceptions change control flow, potentially leading to different objects remaining referenced. Why is it explicitly mentioned? Is there a potential for memory leaks here?
An exception stores a traceback, which stores all child frames ("function calls") between raising and excepting. Frames reference all local names and their values, preventing the garbage collection of local names and values.
This means that an exception handler should finish handling exceptions promptly to allow child locals to be cleaned up. Still, a function cannot rely on its locals being collectable immediately after the function ends.
As a result, patterns such as RAII cannot be relied on to run promptly, even on reference-counted implementations. When prompt cleanup is required, objects should provide a means for explicit cleanup (for use in finally blocks) or, preferably, automatic cleanup (for use in with blocks).
Objects, values and types
[…]
Programs are strongly recommended to explicitly close such objects. The ‘try…finally’ statement and the ‘with’ statement provide convenient ways to do this.
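As a sketch of that recommendation, a hypothetical Resource class supporting both styles of cleanup:

class Resource:
    def close(self):
        print("released")

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()  # runs even if the body raised

# Explicit cleanup with try/finally:
r = Resource()
try:
    pass  # use r
finally:
    r.close()

# Automatic cleanup with a with block:
with Resource() as r:
    pass  # use r; close() is guaranteed on exit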
One can observe this with a class that marks when it is garbage collected.
class Collectible:
    def __init__(self, name):
        self.name = name

    def __del__(self, print=print):
        print("Collecting", self.name)

def inner():
    local_name = Collectible("inner local value")
    raise RuntimeError("This is a drill")

def outer():
    local_name = Collectible("outer local value")
    inner()

try:
    outer()
except RuntimeError as e:
    print(f"handling a {type(e).__name__}: {e}")
On CPython, the output shows that the handler runs before the locals are collected:
handling a RuntimeError: This is a drill
Collecting inner local value
Collecting outer local value
Note that CPython uses reference counting, so cleanup already happens as soon as possible. Other implementations may delay cleanup further, and arbitrarily.
Well, AFAIK, if the exception references some object, that object won't be collected until the exception itself is collected; and if the except clause happens to reference some object, that also postpones its collection until after the block is over. I wonder if there are other, less obvious ways in which catching an exception could affect garbage collection.

Python: how to re-raise an exception which has already been caught?

import sys

def worker(a):
    try:
        return 1 / a
    except ZeroDivisionError:
        return None

def master():
    res = worker(0)
    if not res:
        print(sys.exc_info())
        raise sys.exc_info()[0]
As in the code above, I have a bunch of functions like worker. They already have their own try/except blocks to handle exceptions, and then one master function calls each worker. Right now, sys.exc_info() returns a tuple of three Nones; how can I re-raise the exceptions in the master function?
I am using Python 2.7
One update:
I have more than 1000 workers, and some workers have very complex logic; they may deal with multiple types of exceptions at the same time. So my question is: can I just re-raise those exceptions from master rather than editing the workers?
In your case, the exception handler in worker returns None. Once that happens, there's no getting the exception back. If your master function knows what the return values should be for each function (for example, that ZeroDivisionError in worker returns None), you can manually re-raise an exception.
If you're not able to edit the worker functions themselves, I don't think there's too much you can do. You might be able to use some of the solutions from this answer, if they work in code as well as on the console.
krflol's code above is kind of like how C handled exceptions - there was a global variable that, whenever an exception happened, was assigned a number which could later be cross-referenced to figure out what the exception was. That is also a possible solution.
If you're willing to edit the worker functions, though, then escalating an exception to the code that called the function is actually really simple:
try:
    ...  # some code
except:
    ...  # some response
    raise
If you use a bare raise at the end of an except block, it re-raises the same exception it just caught. Alternatively, you can bind the exception to a name if you need to debug-print it, and do the same thing, or even raise a different exception.
except Exception as e:
    # some code
    raise e
What you're trying to do won't work. Once you handle an exception (without re-raising it), the exception, and the accompanying state, is cleared, so there's no way to access it. If you want the exception to stay alive, you have to either not handle it, or keep it alive manually.
This isn't that easy to find in the docs (the underlying implementation details about CPython are a bit easier, but ideally we want to know what Python the language defines), but it's there, buried in the except reference:
… This means the exception must be assigned to a different name to be able to refer to it after the except clause. Exceptions are cleared because with the traceback attached to them, they form a reference cycle with the stack frame, keeping all locals in that frame alive until the next garbage collection occurs.
Before an except clause’s suite is executed, details about the exception are stored in the sys module and can be accessed via sys.exc_info(). sys.exc_info() returns a 3-tuple consisting of the exception class, the exception instance and a traceback object (see section The standard type hierarchy) identifying the point in the program where the exception occurred. sys.exc_info() values are restored to their previous values (before the call) when returning from a function that handled an exception.
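A quick way to observe this clearing, reusing the question's worker (a sketch, not part of the quoted docs):

import sys

def worker(a):
    try:
        return 1 / a
    except ZeroDivisionError:
        return None

worker(0)
print(sys.exc_info())  # (None, None, None): the handled exception is gone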
Also, this is really the point of exception handlers: when a function handles an exception, to the world outside that function, it looks like no exception happened. This is even more important in Python than in many other languages, because Python uses exceptions so promiscuously—every for loop, every hasattr call, etc. is raising and handling an exception, and you don't want to see them.
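For example, a small illustration of both cases (the Demo class is hypothetical):

class Demo:
    @property
    def attr(self):
        raise AttributeError("nope")

# hasattr works by catching AttributeError internally:
print(hasattr(Demo(), "attr"))  # False

# and every for loop ends by handling StopIteration:
for x in iter([]):
    pass  # never runs; the loop silently caught StopIteration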
So, the simplest way to do this is to just change the workers to not handle the exceptions (or to log and then re-raise them, or whatever), and let exception handling work the way it's meant to.
There are a few cases where you can't do this. For example, if your actual code is running the workers in background threads, the caller won't see the exception. In that case, you need to pass it back manually. For a simple example, let's change the API of your worker functions to return a value and an exception:
def worker(a):
    try:
        return 1 / a, None
    except ZeroDivisionError as e:
        return None, e

def master():
    res, e = worker(0)
    if e:
        print(e)
        raise e
Obviously you can extend this further to return the whole exc_info triple, or whatever else you want; I'm just keeping this as simple as possible for the example.
If you look inside the covers of things like concurrent.futures, this is how they handle passing exceptions from tasks running on a thread or process pool back to the parent (e.g., when you wait on a Future).
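As a minimal sketch of that mechanism, using Python 3's concurrent.futures (a backport exists for 2.7):

from concurrent.futures import ThreadPoolExecutor

def task():
    return 1 / 0  # raises inside the worker thread

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(task)
    try:
        future.result()  # the stored exception is re-raised here
    except ZeroDivisionError as e:
        print("caught from the worker:", e)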
If you can't modify the workers, you're basically out of luck. Sure, you could write some horrible code to patch the workers at runtime (by using inspect to get their source and then using ast to parse, transform, and re-compile it, or by diving right down into the bytecode), but this is almost never going to be a good idea for any kind of production code.
Not tested, but I suspect you could do something like this. Depending on the scope of the variable you'd have to change it, but I think you'll get the idea:
try:
    something()  # code that may raise
except Exception as e:
    variable_to_make_exception = e

# ... later on, use the variable
An example of using this way of handling errors:
errors = {}

try:
    print(foo)
except Exception as e:
    errors['foo'] = e

try:
    print(bar)
except Exception as e:
    errors['bar'] = e

print(errors)
raise errors['foo']
Output:
{'foo': NameError("name 'foo' is not defined",), 'bar': NameError("name 'bar' is not defined",)}
Traceback (most recent call last):
File "<input>", line 13, in <module>
File "<input>", line 3, in <module>
NameError: name 'foo' is not defined

Using try/except in the method definition or in the calling method?

Which of the following code snippets is more common?
#1:
def foo():
    try:
        pass  # Some process
    except Exception as e:
        print(e)

foo()
#2:
def foo():
    pass  # Some process

try:
    foo()
except Exception as e:
    print(e)
It depends on what foo does and on the type of exception, I'd say.
Should the caller handle it or should the method?
For instance, consider the following example:
def try_get_value(registry, key):
    try:
        return registry[key]
    except KeyError:
        return None
This function will attempt to fetch a value from a dictionary using its key. If the value is not there, it should return None.
The method should handle the KeyError, because it needs to return None when this happens, so as to comply with its expected behavior. (It's the method's responsibility to catch this error.)
But think of other exception types, such as TypeError (e.g., if the registry is not a dict).
Why should our method handle that? That's the caller's mistake. The caller should handle it, and the caller should worry about it.
Besides, what can our method do if we get such an exception? There's no way we can handle it from this scope.
try_get_value has one simple task: to get a value from the registry (a default one if there is none). It's not responsible for the caller breaking the rules.
So we don't catch TypeError, because it's not our responsibility.
Hence, the caller's code may look something like this:
try:
    value = try_get_value(reg, 'some_key')
    # Handle value
except TypeError:
    # reg is not a dict, do something about it...
    ...
P.S.: There may be times when our foo method needs to do some cleanup if there is an unexpected exit (e.g. it has allocated some resources which would leak if not closed).
In this case, foo should catch the exceptions, just so it can fix its state appropriately, but should then raise them back again to the caller, as in the sketch below.
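A minimal sketch of that catch, clean up, and re-raise pattern (the file name is just a placeholder):

def foo():
    handle = open("data.txt")  # a resource that must not leak
    try:
        data = handle.read()
    except Exception:
        handle.close()  # fix our state...
        raise           # ...then hand the exception back to the caller
    handle.close()
    return data

In practice a finally clause or a with block expresses this more directly, but the explicit re-raise makes the intent of the pattern visible.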
I think the first version is cleaner and more elegant, and also more logical: as the implementer of the function, you want to handle all the exceptions it might throw rather than leave that to the client or caller. Even if you'll be the only one using the method, you still want to handle exceptions inside the function, since in the future you may not remember what exceptions it throws.

Multiprocessing, what does pool.ready do?

Suppose I have a pool with a few processes inside of a class that I use to do some processing, like this:
from multiprocessing import Pool

class MyClass:
    def __init__(self):
        self.pool = Pool(processes=NUM_PROCESSES)
        self.pop = []
        self.finished = []

    def gen_pop(self):
        self.pop = [self.pool.apply_async(Item.test, (Item(),))
                    for _ in range(NUM_PROCESSES)]
        while not self.check():
            continue
        # Do some other stuff

    def check(self):
        self.finished = filter(lambda t: self.pop[t].ready(),
                               range(NUM_PROCESSES))
        new_pop = []
        for f in self.finished:
            new_pop.append(self.pop[f].get(timeout=1))
            self.pop[f] = None
        # Do some other stuff
When I run this code I get a cPickle.PicklingError, which states that a <type 'function'> can't be pickled. What this tells me is that one of the apply_async calls has not returned yet, so I am attempting to append a running function to another list. But this shouldn't be happening, because all running calls should have been filtered out using the ready() function.
On a related note, the actual nature of the Item class is unimportant, but what is important is that at the top of my Item.test function I have a print statement which is supposed to fire for debugging purposes. However, that does not occur. This tells me that the function has been initiated but has not actually started execution.
So then, it appears that ready() does not actually tell me whether or not a call has finished execution or not. What exactly does ready() do and how should I edit my code so that I can filter out the processes that are still running?
Multiprocessing uses the pickle module internally to pass data between processes, so your data must be picklable. See the list of what is considered picklable; an object's method is not in that list.
To solve this quickly just use a wrapper function around the method:
def wrap_item_test(item):
item.test()
class MyClass:
def gen_pop(self):
self.pop = [ self.pool.apply_async(wrap_item_test, (Item(),)) for _ in range(NUM_PROCESSES) ]
while (not self.check()):
continue
To answer the question you asked, .ready() is really telling you whether .get() may block: if .ready() returns True, .get() will not block, but if .ready() returns False, .get() may block (or it may not: quite possible the async call will complete before you get around to calling .get()).
So, e.g., the timeout = 1 in your .get() serves no purpose: since you only call .get() if .ready() returned True, you already know for a fact that .get() won't block.
But .get() not blocking does not imply the async call was successful, or even that a worker process even started working on an async call: as the docs say,
If the remote call raised an exception then that exception will be reraised by get().
That is, e.g., if the async call couldn't be performed at all, .ready() will return True and .get() will (re)raise the exception that prevented the attempt from working.
That appears to be what's happening in your case, although we have to guess because you didn't post runnable code, and didn't include the traceback.
Note that if what you really want to know is whether the async call completed normally, after already getting True back from .ready(), then .successful() is the method to call.
It's pretty clear that, whatever Item.test may be, it's flatly impossible to pass it as a callable to .apply_async(), due to pickle restrictions. That explains why Item.test never prints anything (it's never actually called!), why .ready() returns True (the .apply_async() call failed), and why .get() raises an exception (because .apply_async() encountered an exception while trying to pickle one of its arguments - probably Item.test).
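To make the .ready()/.successful()/.get() relationship concrete, a small Python 3 sketch (not the question's code):

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        res = pool.apply_async(square, (3,))
        res.wait()               # block until the call completes
        print(res.ready())       # True: get() will not block now
        print(res.successful())  # True: the call did not raise
        print(res.get())         # 9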

Why can't I pickle an error's Traceback in Python?

I've since found a workaround, but I still want to know the answer.
The traceback holds references to the stack frames of each function/method that was called on the current thread, from the topmost-frame on down to the point where the error was raised. Each stack frame also holds references to the local and global variables in effect at the time each function in the stack was called.
Since there is no way for pickle to know what to serialize and what to ignore, if you were able to pickle a traceback you'd end up pickling a moving snapshot of the entire application state: as pickle runs, other threads may be modifying the values of shared variables.
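You can see the restriction directly; trying to pickle a traceback object fails immediately:

import pickle
import sys

try:
    1 / 0
except ZeroDivisionError:
    tb = sys.exc_info()[2]

try:
    pickle.dumps(tb)
except TypeError as e:
    print(e)  # e.g. "cannot pickle 'traceback' object"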
One solution is to create a picklable object to walk the traceback and extract only the information you need to save.
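A minimal sketch of that idea, keeping only plain strings (the capture helper and its keys are illustrative, not a standard API):

import pickle
import traceback

def capture(exc):
    # Keep only plain strings, which pickle trivially,
    # instead of frame objects, which do not.
    return {
        "type": type(exc).__name__,
        "message": str(exc),
        "traceback": "".join(traceback.format_exception(
            type(exc), exc, exc.__traceback__)),
    }

try:
    1 / 0
except ZeroDivisionError as e:
    blob = pickle.dumps(capture(e))

print(pickle.loads(blob)["traceback"])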
You can use tblib:
import pickle
from tblib import pickling_support

try:
    try:
        1 / 0
    except Exception as e:
        raise Exception("foo") from e
except Exception as e:
    pickling_support.install(e)  # register this exception chain for pickling
    s = pickle.dumps(e)

raise pickle.loads(s)
I guess you are interested in saving the complete call context (traceback + globals + locals of each frame).
That would be very useful for determining a difference in behavior of the same function in two different call contexts, or for building your own advanced tools to process, show, or compare those tracebacks.
The problem is that pickle doesn't know how to serialize all the types of objects that could be in the locals or globals.
I guess you can build your own object and save it, filtering out all those objects that are not picklable. This code can serve as a basis:
import sys
import traceback

def print_exc_plus():
    """
    Print the usual traceback information, followed by a listing of all the
    local variables in each frame.
    """
    # Walk to the innermost traceback entry.
    tb = sys.exc_info()[2]
    while tb.tb_next:
        tb = tb.tb_next
    # Collect the frames from outermost to innermost.
    stack = []
    f = tb.tb_frame
    while f:
        stack.append(f)
        f = f.f_back
    stack.reverse()
    traceback.print_exc()
    print("Locals by frame, innermost last")
    for frame in stack:
        print()
        print("Frame %s in %s at line %s" % (frame.f_code.co_name,
                                             frame.f_code.co_filename,
                                             frame.f_lineno))
        for key, value in frame.f_locals.items():
            print("\t%20s = " % key, end=" ")
            # We have to be careful not to cause a new error in our error
            # printer! Calling str() on an unknown object could cause an
            # error we don't want.
            try:
                print(value)
            except Exception:
                print("<ERROR WHILE PRINTING VALUE>")
But instead of printing the objects, you can add them to a list with your own picklable representation (a JSON or YAML format might be better).
Maybe you want to load all this call context in order to reproduce the same situation for your function without running the complicated workflow that generated it. I don't know if this can be done (because of memory references), but in that case you would need to de-serialize it from your format.
