RAII in Python: What's the point of __del__?

At first glance, it seems like Python's __del__ special method offers many of the same advantages a destructor has in C++. But according to the Python documentation (https://docs.python.org/3.4/reference/datamodel.html), there is no guarantee that your object's __del__ method ever gets called at all!
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
So in other words, the method is useless! Isn't it? A hook function that may or may not get called really doesn't do much good, so __del__ offers nothing with regard to RAII. If I have some essential cleanup, I don't need it to run some of the time, whenever the GC feels like it; I need it to run reliably, deterministically, and 100% of the time.
I know that Python provides context managers, which are far more useful for that task, but why was __del__ kept around at all? What's the point?

__del__ is a finalizer. It is not a destructor. Finalizers and destructors are entirely different animals.
Destructors are called reliably, and only exist in languages with deterministic memory management (such as C++). Python's context managers (the with statement) can achieve similar effects in certain circumstances. These are reliable because the lifespan of an object is precisely fixed; in C++, objects die when they are explicitly deleted or when some scope is exited (or when a smart pointer deletes them in response to its own destruction). And that's when destructors run.
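For instance, here is a minimal sketch of deterministic cleanup via a context manager (the class and the prints stand in for real resource acquisition and release):

import traceback

class ManagedResource:
    def __enter__(self):
        print('acquired')   # acquire the real resource here
        return self

    def __exit__(self, exc_type, exc_value, tb):
        print('released')   # runs deterministically, even if an exception was raised
        return False        # do not suppress exceptions

with ManagedResource() as res:
    print('using resource')
# 'released' is guaranteed to have been printed by this point.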
Finalizers are not called reliably. The only valid use of a finalizer is as an emergency safety net (NB: this article is written from a .NET perspective, but the concepts translate reasonably well). For instance, the file objects returned by open() automatically close themselves when finalized. But you're still supposed to close them yourself (e.g. using the with statement). This is because the objects are destroyed dynamically by the garbage collector, which may or may not run right away, and with generational garbage collection, it may or may not collect some objects in any given pass. Since nobody knows what kinds of optimizations we might invent in the future, it's safest to assume that you just can't know when the garbage collector will get around to collecting your objects. That means you cannot rely on finalizers.
In the specific case of CPython, you get slightly stronger guarantees, thanks to the use of reference counting (which is far simpler and more predictable than garbage collection). If you can ensure that you never create a reference cycle involving a given object, that object's finalizer will be called at a predictable point (when the last reference dies). This is only true of CPython, the reference implementation, and not of PyPy, IronPython, Jython, or any other implementations.
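A quick illustration of that CPython-specific guarantee (not portable to PyPy and friends):

class Noisy:
    def __del__(self):
        print('finalized')

obj = Noisy()      # refcount: 1
obj = None         # refcount drops to 0; CPython prints 'finalized' right here
print('after rebind')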

Because __del__ does get called. It's just unclear when it will be called, because in CPython, if you have circular references, the refcount mechanism can't take care of reclaiming the object (and thus finalizing it via __del__) and must delegate that to the garbage collector.
The garbage collector then has a problem: it cannot know in which order to break the circular references, because breaking them in the wrong order may trigger additional problems (e.g. freeing memory that is still needed by the finalization of another object in the collected cycle, triggering a segfault).
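A small sketch of the kind of cycle that creates the problem; the finalization order of a and b is not something the collector can infer:

class Node:
    def __init__(self, name):
        self.name = name
        self.peer = None

    def __del__(self):
        # If self.peer was finalized first, this runs against a
        # partially torn-down object.
        print('finalizing', self.name)

a, b = Node('a'), Node('b')
a.peer, b.peer = b, a   # reference cycle: refcounts never reach zero
del a, b                # only the cyclic garbage collector can reclaim these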
The point you stress happens because the interpreter may exit for reasons that prevent it from performing the cleanup (e.g. it segfaults, or some C module impolitely calls exit()).
There's PEP 442 for safe object finalization, which was implemented in Python 3.4. I suggest you take a look at it: https://www.python.org/dev/peps/pep-0442/

Related

Proper finalization in Python

I have a bunch of instances, each having a unique temp file for its use (saving data from memory to disk and retrieving it later).
I want to be sure that at the end of the day, all these files are removed. However, I want to leave room for fine-grained control of their deletion. That is, some files may be removed earlier, if needed (e.g. they are too big and not important any more).
What is the best / recommended way to achieve this?
My thoughts on this:
try-finally blocks or with statements are not an option, as we have many files whose lifetimes may overlap each other. Also, they hardly allow for finer control.
From what I have read, __del__ is also not a feasible option, as it is not even guaranteed that it will eventually run (although it is not entirely clear to me what the "risky" cases are). Also (if it is still the case), libraries may no longer be available when __del__ runs.
The tempfile library seems promising. However, the file is gone right after closing it, which is definitely a bummer, as I want the files to be closed (when they perform no operation) to limit the number of open files.
The library promises that the file "will be destroyed as soon as it is closed (including an implicit close when the object is garbage collected)."
How do they achieve the implicit close? E.g. in C# I would use a (reliable) finalizer, which __del__ is not.
The atexit library seems to be the best candidate; it can work as a reliable finalizer instead of __del__ to implement a safe disposable pattern. The only problem, compared to object finalizers, is that it runs truly at exit, which is rather inconvenient (what if the object is eligible to be garbage collected earlier?).
Here, the question still stands: how does the library ensure that the registered methods always run? (Except in really unexpected cases, which are hard to do anything about.)
In the ideal case, it seems that a combination of __del__ and the atexit library might perform best. That is, the clean-up happens both in __del__ and in the method registered with atexit, while repeated clean-up is forbidden: if __del__ is called first, the registered handler is removed.
The only (yet crucial) problem is that __del__ won't run if a method is registered with atexit, because a reference to the object then exists forever.
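A sketch of that pitfall, assuming the cleanup is registered as a bound method (the class is hypothetical):

import atexit

class TempResource:
    def __init__(self):
        # atexit holds a strong reference to the bound method, and the
        # bound method holds self, so __del__ can never run before exit.
        atexit.register(self.clean_up)

    def clean_up(self):
        print('cleaning up')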
Thus, any suggestions, advice, useful links and so on are welcome.
I suggest considering the built-in weakref module for this task, more specifically weakref.finalize. A simple example:
import weakref

class MyClass:
    pass

def clean_up(*args):
    print('clean_up', args)

my_obj = MyClass()
weakref.finalize(my_obj, clean_up, 'arg1', 'arg2', 'arg3')
del my_obj  # optional
When run, it will output:
clean_up ('arg1', 'arg2', 'arg3')
Note that clean_up will be executed even without del-ing my_obj (you can delete the last line of the code and the behavior will not change). clean_up is called once all strong references to my_obj are gone, or at interpreter exit (similar to using the atexit module).
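Applied to the temp-file scenario from the question, a sketch could look like this (the class name and file handling are illustrative, not a drop-in solution):

import os
import tempfile
import weakref

class DiskBackedData:
    def __init__(self):
        fd, self.path = tempfile.mkstemp()
        os.close(fd)  # keep the file closed while idle; reopen on demand
        # Runs when the instance is collected or at interpreter exit,
        # whichever comes first.
        self._finalizer = weakref.finalize(self, os.remove, self.path)

    def remove(self):
        # Fine-grained control: delete the file early. A finalize object
        # calls its callback at most once, so there is no double removal.
        self._finalizer()

data = DiskBackedData()
data.remove()  # explicit early removal; nothing happens again at exit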

Where is Python's shutdown procedure setting module globals to None documented?

CPython has a strange behaviour where it sets modules to None during shutdown. This screws up error logging during shutdown of some multithreading code I've written.
I can't find any documentation of this behaviour. It's mentioned in passing in PEP 432:
[...] significantly reducing the number of modules that will experience the "module globals set to None" behaviour that is used to deliberately break cycles and attempt to release more external resources cleanly.
There are SO questions about this behaviour and the C API documentation mentions shutdown behaviour for embedded interpreters.
I've also found a related thread on python-dev and a related CPython bug:
This patch does not change the behavior of module objects clearing their globals dictionary as soon as they are deallocated.
Where is this behaviour documented? Is it Python 2 specific?
The behaviour is not well documented, and is present in all versions of Python from about 1.5-ish until Python 3.4:
As part of this change, module globals are no longer forcibly set to None during interpreter shutdown in most cases, instead relying on the normal operation of the cyclic garbage collector.
The only documentation for the behaviour is the moduleobject.c source code:
/* To make the execution order of destructors for global
   objects a bit more predictable, we first zap all objects
   whose name starts with a single underscore, before we clear
   the entire dictionary. We zap them by replacing them with
   None, rather than deleting them from the dictionary, to
   avoid rehashing the dictionary (to some extent). */
Note that setting the values to None is an optimisation; the alternative would be to delete names from the mapping, which would lead to different errors (NameError exceptions rather than AttributeErrors when trying to use globals from a __del__ handler).
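A minimal sketch of that failure mode as it could appear in Python versions before 3.4 (the class is hypothetical):

import os

class Cleanup:
    def __del__(self):
        try:
            os.getpid()
        except AttributeError:
            # The name 'os' still exists but now refers to None, so the
            # failure is an AttributeError, not a NameError.
            pass

_keeper = Cleanup()  # finalized during interpreter shutdown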
As you found out on the mailing list, the behaviour predates the cyclic garbage collector; it was added in 1998, while the cyclic garbage collector arrived in 2000. Since function objects always reference their module's __dict__, all function objects in a module are involved in circular references, which is why the __dict__ needed clearing before GC came into play.
It was kept in place even when cyclic GC was added, because there might be objects with __del__ methods involved in cycles. These aren't otherwise garbage-collectable, and cleaning out the module dictionary would at least remove the module __dict__ from such cycles. Not doing that would keep all referenced globals of that module alive.
The changes made for PEP 442 make it possible for the garbage collector to clear cyclic references involving objects that provide a __del__ finalizer, removing the need to clear the module __dict__ in most cases. The code is still there, but it is only triggered if the __dict__ attribute is still alive after the contents of sys.modules have been replaced with weak references and a GC collection run has been started as the interpreter shuts down; the module finalizer simply decrements their reference counts.
There is a small amount of related documentation at the bottom of the threading docs:
Secondly, all import attempts must be completed before the interpreter starts shutting itself down. [..] Failure to abide by this restriction will lead to intermittent exceptions and crashes during interpreter shutdown (as the late imports attempt to access machinery which is no longer in a valid state).

Garbage collector and problems with the __del__ finalizer

Surfing the internet (here) I found that the garbage collector has some problems collecting objects that have a __del__ method.
My question is simple: why?
According to the documentation:
Objects that have __del__() methods and are part of a reference cycle cause the entire reference cycle to be uncollectable, including objects not necessarily in the cycle but reachable only from it. Python doesn’t collect such cycles automatically because, in general, it isn’t possible for Python to guess a safe order in which to run the __del__() methods.
Why is the __del__ method so problematic? What's the difference between an object that implements it and one which doesn't? It only destroys an instance.
__del__ doesn't destroy an instance; the instance is automatically destroyed by the Python runtime once its reference count reaches zero. __del__ allows you to hook into that process and perform additional actions, such as freeing external resources associated with the object.
The danger is that the additional action may even resurrect the object - for example, by storing it into a global container. In that case, the destruction is effectively cancelled (until the next time the object's reference count drops to zero). It is this scenario that causes the mere presence of __del__ to exclude the object from those governed by the cycle breaker (also known as the garbage collector). If the collector invoked __del__ on all objects in the cycle, and one of them decided to resurrect the object, it would need to resurrect the whole cycle - which is impossible, since the __del__ methods of the other cycle members have already been invoked, possibly causing permanent damage to their objects (e.g. by freeing external resources, as mentioned above).
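A short sketch of resurrection in action (the timing of the __del__ call is CPython-specific):

_graveyard = []

class Phoenix:
    def __del__(self):
        # Storing a new strong reference cancels the destruction
        # that triggered this call.
        _graveyard.append(self)

p = Phoenix()
del p
print(len(_graveyard))  # 1 -- the object survived its own finalizer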
If you only need to be notified of an object's destruction, use weakref.ref. If your object is associated with external resources that need freeing, implement a close method and/or a context manager interface. There is almost never a legitimate reason to use __del__.
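A minimal sketch of that recommended pattern, with a close method doubling as the context-manager exit (the names and the boolean flag are illustrative stand-ins for a real resource):

class Resource:
    def __init__(self):
        self._open = True       # stand-in for acquiring a real resource

    def close(self):
        if self._open:
            self._open = False  # release the real resource here

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        self.close()

with Resource() as r:
    pass  # the resource is reliably released when the block exits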

Finding where a python object is hiding

I have a problem in which there is a python object that is hiding somewhere. The object is a wrapper around a C library and I need to call the deallocation routine at exit, otherwise the (behind the scenes) thread will hang (it's a cython object, so the deallocation is put in the __dealloc__ method).
The problem is I can't for the life of me work out where the object is hiding. I haven't intentionally introduced any global state. Is there some way to work out where an object is lingering? Could it just be a lingering object cycle, so gc should pick it up? That said, I'd really like to work out the cause of the problem if possible.
Edit: I solved the problem, which was down to pyglet event handlers not being cleanly removed. The removal was in the __del__ method, but the object was never deleted, because the event dispatcher held a reference to one of the object's methods. This is fine logically, but it seems odd to me that the object is never deleted, even at exit. Does anyone know why __del__ is not called at interpreter exit? Actually, this question has been asked before - though the answers aren't brilliant.
Anyway, the basic question still stands - how do I reliably find these lingering references?
One possible place is gc.garbage. It is a list of objects that have been found unreachable but cannot be deleted because they include __del__ methods in a cycle.
In Python versions before 3.4, if you have a cycle with several __del__ methods, the interpreter doesn't know in which order they should be executed, as they could have mutual references. So instead it executes none of them and moves the objects to this list.
If you find your object there, the documentation recommends doing del gc.garbage[:].
The solution to avoid this in the first place is to use weakrefs where possible to avoid cycles.
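To track down who is holding a lingering reference, gc.get_referrers can help; a rough sketch (the output includes frames and internal containers, so it takes some reading):

import gc

def who_holds(obj):
    # Print the objects that directly reference obj. The current frame
    # shows up too, since it holds obj via the function argument.
    for referrer in gc.get_referrers(obj):
        print(type(referrer), repr(referrer)[:80])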

Why is the destructor called when the CPython garbage collector is disabled?

I'm trying to understand the internals of the CPython garbage collector, specifically when the destructor is called. So far, the behavior is intuitive, but the following case trips me up:
Disable the GC.
Create an object, then remove a reference to it.
The object is destroyed and the __del__ method is called.
I thought this would only happen if the garbage collector was enabled. Can someone explain why this happens? Is there a way to defer calling the destructor?
import gc
import unittest

_destroyed = False

class MyClass(object):
    def __del__(self):
        global _destroyed
        _destroyed = True

class GarbageCollectionTest(unittest.TestCase):
    def testExplicitGarbageCollection(self):
        gc.disable()
        ref = MyClass()
        ref = None
        # The next test fails. The object is automatically destroyed
        # even with the collector turned off.
        self.assertFalse(_destroyed)
        gc.collect()
        self.assertTrue(_destroyed)

if __name__ == '__main__':
    unittest.main()
Disclaimer: this code is not meant for production -- I've already noted that this is very implementation-specific and does not work on Jython.
Python has both reference counting garbage collection and cyclic garbage collection, and it's the latter that the gc module controls. Reference counting can't be disabled, and hence still happens when the cyclic garbage collector is switched off.
Since there are no references left to your object after ref = None, its __del__ method is called as a result of its reference count going to zero.
There's a clue in the documentation: "Since the collector supplements the reference counting already used in Python..." (my emphasis).
You can stop the first assertion from firing by making the object refer to itself, so that its reference count doesn't go to zero, for instance by giving it this constructor:
def __init__(self):
    self.myself = self
But if you do that, the second assertion will fire. That's because, prior to Python 3.4, garbage cycles with __del__ methods don't get collected - see the documentation for gc.garbage.
The docs here (the original link was to a documentation section that existed up to Python 3.5 and was later relocated) explain how what's called "the optional garbage collector" is actually a collector of cyclic garbage, the kind that reference counting wouldn't catch (see also here). Reference counting is explained here, with a nod to its interplay with the cyclic gc:
While Python uses the traditional reference counting implementation, it also offers a cycle detector that works to detect reference cycles. This allows applications to not worry about creating direct or indirect circular references; these are the weakness of garbage collection implemented using only reference counting. Reference cycles consist of objects which contain (possibly indirect) references to themselves, so that each object in the cycle has a reference count which is non-zero. Typical reference counting implementations are not able to reclaim the memory belonging to any objects in a reference cycle, or referenced from the objects in the cycle, even though there are no further references to the cycle itself.
Depending on your definition of garbage collector, CPython has two garbage collectors, the reference counting one, and the other one.
The reference counter is always working and cannot be turned off, as it's quite fast and lightweight and does not significantly affect the run time of the system.
The other one (some variant of mark and sweep, I think) runs every so often, and can be disabled. This is because it requires the interpreter to be paused while it runs, and the pause can happen at the wrong moment and consume quite a lot of CPU time.
This ability to disable it is there for those times when you expect to be doing something time-critical and the lack of this GC won't cause you any problems.
