class Package:
    def __init__(self):
        self.files = []

    # ...

    def __del__(self):
        for file in self.files:
            os.unlink(file)
__del__(self) above fails with an AttributeError exception. I understand Python doesn't guarantee the existence of "global variables" (member data in this context?) when __del__() is invoked. If that is the case and this is the reason for the exception, how do I make sure the object destructs properly?
I'd recommend using Python's with statement for managing resources that need to be cleaned up. The problem with using an explicit close() statement is that you have to worry about people forgetting to call it at all or forgetting to place it in a finally block to prevent a resource leak when an exception occurs.
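For comparison, here is roughly the manual pattern being described, sketched with a file since that is the classic resource example (not part of the original answer):

# the manual pattern that `with` replaces: an explicit close() in a finally block
f = open("temp.txt", "w")
try:
    f.write("Hi!")
finally:
    f.close()  # easy to forget entirely, or to leave outside a finally block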
To use the with statement, create a class with the following methods:
def __enter__(self)
def __exit__(self, exc_type, exc_value, traceback)
In your example above, you'd use
class Package:
    def __init__(self):
        self.files = []

    def __enter__(self):
        return self

    # ...

    def __exit__(self, exc_type, exc_value, traceback):
        for file in self.files:
            os.unlink(file)
Then, when someone wanted to use your class, they'd do the following:
with Package() as package_obj:
    # use package_obj
The variable package_obj will be an instance of type Package (it's the value returned by the __enter__ method). Its __exit__ method will automatically be called, regardless of whether or not an exception occurs.
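As a quick illustration of that last point (a small sketch, not part of the original answer), __exit__ still runs when the body raises:

class Demo:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print("cleanup ran, exception type:", exc_type)

try:
    with Demo():
        raise RuntimeError("boom")
except RuntimeError:
    pass  # the cleanup line above was printed before we got here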
You could even take this approach a step further. In the example above, someone could still instantiate Package using its constructor without using the with clause. You don't want that to happen. You can fix this by creating a PackageResource class that defines the __enter__ and __exit__ methods. Then, the Package class would be defined strictly inside the __enter__ method and returned. That way, the caller never could instantiate the Package class without using a with statement:
class PackageResource:
    def __enter__(self):
        class Package:
            ...
        self.package_obj = Package()
        return self.package_obj

    def __exit__(self, exc_type, exc_value, traceback):
        self.package_obj.cleanup()
You'd use this as follows:
with PackageResource() as package_obj:
    # use package_obj
The standard way is to use atexit.register:
# package.py
import atexit
import os
class Package:
    def __init__(self):
        self.files = []
        atexit.register(self.cleanup)

    def cleanup(self):
        print("Running cleanup...")
        for file in self.files:
            print("Unlinking file: {}".format(file))
            # os.unlink(file)
But you should keep in mind that this will persist all created instances of Package until Python is terminated.
Demo using the code above saved as package.py:
$ python
>>> from package import *
>>> p = Package()
>>> q = Package()
>>> q.files = ['a', 'b', 'c']
>>> quit()
Running cleanup...
Unlinking file: a
Unlinking file: b
Unlinking file: c
Running cleanup...
A better alternative is to use weakref.finalize. See the examples at Finalizer Objects and Comparing finalizers with __del__() methods.
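As a rough sketch of that alternative (hedged: this mirrors the cleanup logic of the atexit example above, it is not from the original answer), note that the callback must not reference self, so the list is passed in directly:

import os
import weakref

class Package:
    def __init__(self):
        self.files = []
        # unlike atexit.register(self.cleanup), this does not keep the instance alive
        self._finalizer = weakref.finalize(self, Package._cleanup, self.files)

    @staticmethod
    def _cleanup(files):
        for file in files:
            print("Unlinking file: {}".format(file))
            # os.unlink(file)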
As an appendix to Clint's answer, you can simplify PackageResource using contextlib.contextmanager:
import contextlib

@contextlib.contextmanager
def packageResource():
    class Package:
        ...
    package = Package()
    yield package
    package.cleanup()
Alternatively, though probably not as Pythonic, you can override Package.__new__:
class Package(object):
    def __new__(cls, *args, **kwargs):
        @contextlib.contextmanager
        def packageResource():
            # adapt arguments if superclass takes some!
            package = super(Package, cls).__new__(cls)
            package.__init__(*args, **kwargs)
            yield package
            package.cleanup()
        return packageResource()

    def __init__(self, *args, **kwargs):
        ...
and simply use with Package(...) as package.
To make things shorter, name your cleanup function close and use contextlib.closing, in which case you can either use the unmodified Package class via with contextlib.closing(Package(...)) or override its __new__ to the simpler
class Package(object):
    def __new__(cls, *args, **kwargs):
        package = super(Package, cls).__new__(cls)
        package.__init__(*args, **kwargs)
        return contextlib.closing(package)
And this constructor is inherited, so you can simply inherit, e.g.
class SubPackage(Package):
    def close(self):
        pass
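For illustration, usage of the closing-based variant would then look like this (a sketch, assuming the classes above):

# SubPackage() returns a contextlib.closing wrapper (see __new__ above),
# so the with block yields the instance and calls close() on exit
with SubPackage() as package:
    ...  # use package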
Here is a minimal working skeleton:
class SkeletonFixture:
    def __init__(self):
        pass

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        pass

    def method(self):
        pass


with SkeletonFixture() as fixture:
    fixture.method()
Important: return self
If you're like me, and overlook the return self part (of Clint Miller's correct answer), you will be staring at this nonsense:
Traceback (most recent call last):
  File "tests/simplestpossible.py", line 17, in <module>
    fixture.method()
AttributeError: 'NoneType' object has no attribute 'method'
Hope it helps the next person.
I don't think that it's possible for instance members to be removed before __del__ is called. My guess would be that the reason for your particular AttributeError is somewhere else (maybe you mistakenly remove self.files elsewhere).
However, as the others pointed out, you should avoid using __del__. One reason is that, in Python versions before 3.4, instances with a __del__ method were not collected as part of reference cycles (they were only freed when their refcount reached 0), so instances involved in circular references could live in memory for as long as the application ran. Since Python 3.4 (PEP 442) that limitation is gone, but __del__ remains fragile: the timing and ordering of its calls, particularly at interpreter shutdown, are not guaranteed.
I think the problem could be in __init__, if there is more code than shown.
__del__ will be called even when __init__ has not executed properly or has raised an exception.
Just wrap your destructor with a try/except statement and it will not throw an exception if your globals are already disposed of.
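A minimal sketch of that idea, using the Package class from the question (note that this only silences the error; it does not guarantee the files actually get removed):

import os

class Package:
    def __init__(self):
        self.files = []

    def __del__(self):
        try:
            for file in self.files:
                os.unlink(file)
        except Exception:
            pass  # attributes or globals may already be gone at interpreter shutdown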
Edit
Try this:
from weakref import proxy

class MyList(list): pass

class Package:
    def __init__(self):
        self.__del__.im_func.files = MyList([1,2,3,4])
        self.files = proxy(self.__del__.im_func.files)

    def __del__(self):
        print self.__del__.im_func.files
It will stuff the file list in the del function that is guaranteed to exist at the time of call. The weakref proxy is to prevent Python, or yourself from deleting the self.files variable somehow (if it is deleted, then it will not affect the original file list). If it is not the case that this is being deleted even though there are more references to the variable, then you can remove the proxy encapsulation.
It seems that the idiomatic way to do this is to provide a close() method (or similar), and call it explicitly.
A good idea is to combine both approaches.
Implement a context manager for explicit life-cycle handling, and also handle cleanup for the case where the user forgets it or where it is not convenient to use a with statement. The latter is best done with weakref.finalize.
This is how many libraries actually do it. And depending on the severity, you could issue a warning.
The finalizer is guaranteed to be called exactly once, so it is also safe to call it explicitly at any earlier point.
import os
from typing import List
import weakref
class Package:
    def __init__(self):
        self.files = []
        self._finalizer = weakref.finalize(self, self._cleanup_files, self.files)

    @staticmethod
    def _cleanup_files(files: List):
        for file in files:
            os.unlink(file)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._finalizer()
weakref.finalize returns a callable finalizer object which will be called when obj is garbage collected. Unlike an ordinary weak reference, a finalizer will always survive until the reference object is collected, greatly simplifying lifecycle management.
Unlike atexit.register the object is not held in memory until the interpreter is shut down.
And unlike object.__del__, weakref.finalize is guaranteed to be called at interpreter shutdown, so it is much safer.
atexit.register is the standard way as has already been mentioned in ostrakach's answer.
However, it must be noted that the order in which objects might get deleted cannot be relied upon as shown in example below.
import atexit

class A(object):
    def __init__(self, val):
        self.val = val
        atexit.register(self.hello)

    def hello(self):
        print(self.val)

def hello2():
    a = A(10)

hello2()

a = A(20)
Here, the order seems legitimate, namely the reverse of the order in which the objects were created, as the program gives this output:
20
10
However, in a larger program, once Python's garbage collection kicks in, an object whose lifetime has ended may be destroyed earlier, so this ordering cannot be relied upon.
Following this related question, while there are always examples of some library using a language feature in a unique way, I was wondering whether returning a value other than self in an __enter__ method should be considered an anti-pattern.
The main reason why this seems to me like a bad idea is that it makes wrapping context managers problematic. For example, in Java (also possible in C#), one can wrap an AutoCloseable class in another class which will take care of cleaning up after the inner class, like in the following code snippet:
try (BufferedReader reader =
         new BufferedReader(new FileReader("src/main/resources/input.txt"))) {
    return readAllLines(reader);
}
Here, BufferedReader wraps FileReader, and calls FileReader's close() method inside its own close() method. However, if this was Python, and FileReader would've returned an object other than self in its __enter__ method, this would make such an arrangement significantly more complicated. The following issues would have to be addressed by the writer of BufferedReader:
When I need to use FileReader for my own methods, do I use FileReader directly or the object returned by its __enter__ method? What methods are even supported by the returned object?
In my __exit__ method, do I need to close only the FileReader object, or the object returned in the __enter__ method?
What happens if __enter__ actually returns a different object on each call? Do I now need to keep a collection of all of the different objects returned by it in case someone calls __enter__ several times on me? How do I know which one to use when I need one of these objects?
And the list goes on. One semi-successful solution to all of these problems would be to simply avoid having one context manager class clean up after another context manager class. In my example, that would mean that we would need two nested with blocks - one for the FileReader, and one for the BufferedReader. However, this makes us write more boilerplate code, and seems significantly less elegant.
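For concreteness, here is a rough sketch of that nested arrangement using the standard io module ("input.txt" is just a placeholder path):

import io

# two nested with blocks: each context manager is entered and exited separately
with open("input.txt", "rb", buffering=0) as raw:   # a raw, unbuffered file object
    with io.BufferedReader(raw) as reader:          # a buffered wrapper around it
        lines = reader.readlines()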
All in all, these issues lead me to believe that while Python does allow us to return something other than self in the __enter__ method, this behavior should simply be avoided. Is there some official or semi-official remarks about these issues? How should a responsible Python developer write code that addresses these issues?
TLDR: Returning something other than self from __enter__ is perfectly fine and not bad practice.
The introducing PEP 343 and the Context Manager specification expressly list this as a desired use case.
An example of a context manager that returns a related object is the
one returned by decimal.localcontext(). These managers set the active
decimal context to a copy of the original decimal context and then
return the copy. This allows changes to be made to the current decimal
context in the body of the with statement without affecting code
outside the with statement.
The standard library has several examples of returning something other than self from __enter__. Notably, much of contextlib matches this pattern.
contextlib.contextmanager produces context managers which cannot return self, because there is no such thing.
contextlib.closing wraps a thing and returns it on __enter__.
contextlib.nullcontext returns a pre-defined constant
threading.Lock returns a boolean
decimal.localcontext returns a copy of its argument
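As a quick illustration of two of these (a short sketch using only standard-library calls):

import decimal
import threading

# threading.Lock.__enter__ returns the result of acquire(), i.e. True
with threading.Lock() as acquired:
    print(acquired)                # True, not the Lock object

# decimal.localcontext() returns a copy of the active decimal context
with decimal.localcontext() as ctx:
    ctx.prec = 4                   # changes affect only the copy
    print(decimal.Decimal(1) / 3)  # 0.3333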
The context manager protocol makes it clear what the context manager is and who is responsible for cleanup. Most importantly, the return value of __enter__ is inconsequential for the protocol.
A rough paraphrasing of the protocol is this: When something runs cm.__enter__, it is responsible for running cm.__exit__. Notably, whatever code does that has access to cm (the context manager itself); the result of cm.__enter__ is not needed to call cm.__exit__.
In other words, code that enters a ContextManager must also exit it completely. Any other code does not have to care whether the value it works with came from a ContextManager or not.
from typing import Any, ContextManager

# entering a context manager requires closing it…
def managing(cm: ContextManager):
    value = cm.__enter__()  # must clean up `cm` after this point
    try:
        yield from unmanaged(value)
    except BaseException as exc:
        if not cm.__exit__(type(exc), exc, exc.__traceback__):
            raise
    else:
        cm.__exit__(None, None, None)

# …other code does not need to know where its values come from
def unmanaged(smth: Any):
    yield smth
When context managers wrap others, the same rules apply: If the outer context manager calls the inner one's __enter__, it must call its __exit__ as well. If the outer context manager already has the entered inner context manager, it is not responsible for cleanup.
In some cases it is in fact bad practice to return self from __enter__. Returning self from __enter__ should only be done if self is fully initialised beforehand; if __enter__ runs any initialisation code, a separate object should be returned.
from typing import TextIO

class BadContextManager:
    """
    Anti Pattern: Context manager is in inconsistent state before ``__enter__``
    """
    def __init__(self, path):
        self.path = path
        self._file = None  # BAD: initialisation not complete

    def read(self, n: int):
        return self._file.read(n)  # fails before the context is entered!

    def __enter__(self) -> 'BadContextManager':
        self._file = open(self.path)
        return self  # BAD: self was not valid before

    def __exit__(self, exc_type, exc_val, tb):
        self._file.close()


class GoodContext:
    def __init__(self, path):
        self.path = path
        self._file = None  # GOOD: Inconsistent state not visible/used

    def __enter__(self) -> TextIO:
        if self._file is not None:
            raise RuntimeError(f'{self.__class__.__name__} is not re-entrant')
        self._file = open(self.path)
        return self._file  # GOOD: value was not accessible before

    def __exit__(self, exc_type, exc_val, tb):
        self._file.close()
Notably, even though GoodContext returns a different object, it is still responsible to clean up. Another context manager wrapping GoodContext does not need to close the return value, it just has to call GoodContext.__exit__.
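To make that concrete, here is a small sketch (not from the original answer) of a hypothetical wrapper that manages GoodContext without ever closing the file it returns:

class CountingReader:
    """Hypothetical wrapper: it entered GoodContext, so it must exit it."""
    def __init__(self, path):
        self._inner = GoodContext(path)
        self.lines_read = 0

    def __enter__(self):
        self._file = self._inner.__enter__()
        return self

    def read_line(self):
        self.lines_read += 1
        return self._file.readline()

    def __exit__(self, exc_type, exc_val, tb):
        # delegate cleanup to the inner context manager, not to its return value
        return self._inner.__exit__(exc_type, exc_val, tb)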
There are essentially three ways to use the with statement:
Use an existing context manager:
with manager:
    pass
Create a context manager and bind its result to a variable:
with Manager() as result:
    pass
Create a context manager and discard its return value:
with Manager():
    pass
If we place a function get_manager() inside the three with blocks above, is there any implementation that can return the enclosing context manager, or at least its __exit__ function?
It's obviously easy in the first case, but I can't think of a way to make it work in the other two. I doubt it's possible to get the entire context manager, since the value stack is popped immediately after the SETUP_WITH opcode. However, since the __exit__ function is stored on the block stack by SETUP_WITH, is there some way to access it?
Unfortunately, as discussed in the comments, this is not possible in all cases. When a context manager is created, the following code is run (in cPython 2.7, at least. I can't comment on other implementations):
case SETUP_WITH:
{
    static PyObject *exit, *enter;
    w = TOP();
    x = special_lookup(w, "__exit__", &exit);
    if (!x)
        break;
    SET_TOP(x);
    /* more code follows... */
}
The __exit__ method is pushed onto a stack with the SET_TOP macro, which is defined as:
#define SET_TOP(v) (stack_pointer[-1] = (v))
The stack pointer, in turn, is set to the top of the frame's value stack at the start of frame eval:
stack_pointer = f->f_stacktop;
Where f is a frame object defined in frameobject.h. Unfortunately for us, this is where the trail stops. The python accessible frame object is defined with the following methods only:
static PyMemberDef frame_memberlist[] = {
    {"f_back",      T_OBJECT,   OFF(f_back),      RO},
    {"f_code",      T_OBJECT,   OFF(f_code),      RO},
    {"f_builtins",  T_OBJECT,   OFF(f_builtins),  RO},
    {"f_globals",   T_OBJECT,   OFF(f_globals),   RO},
    {"f_lasti",     T_INT,      OFF(f_lasti),     RO},
    {NULL}      /* Sentinel */
};
Which, unfortunately, does not include the f_valuestack that we would need. This makes sense, since f_valuestack is of the type PyObject **, which would need to be wrapped in an object to be accessible from Python anyway.
TL;DR: The __exit__ method we're looking for is only located in one place, the value stack of a frame object, and cPython doesn't make the value stack accessible to python code.
The difference between this case and similar-appearing cases like super is that here there is no enclosing frame to look at. A with statement is not a new scope. sys._getframe(0) (or, if you're putting the code into a function, sys._getframe(1)) will work just fine, but it'll return you the exact same frame you have before and after the with statement.
The only way you could do it would be by inspecting the bytecode. But even that won't help. For example, try this:
import dis
import sys
from contextlib import contextmanager

@contextmanager
def silly():
    yield

with silly():
    fr = sys._getframe(0)
    dis.dis(fr.f_code)
Obviously, as SETUP_WITH explains, the method does get looked up and pushed onto the stack for WITH_CLEANUP to use later. So, even after POP_TOP removes the return value of silly(), its __exit__ is still on the stack.
But there's no way to get at that from Python. Unless you want to start munging the bytecode, or digging apart the stack with ctypes or something, it might as well not exist.
If the context manager is a class and only ever has a single instance, then you could find it on the heap:
import gc

class ConMan(object):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        print("enter %s" % self.name)

    def found(self):
        print("You found %s!" % self.name)

    def __exit__(self, *args):
        print("exit %s" % self.name)

def find_single(typ):
    single = None
    for obj in gc.get_objects():
        if isinstance(obj, typ):
            if single is not None:
                raise ValueError("Found more than one")
            single = obj
    return single

def foo():
    conman = find_single(ConMan)
    conman.found()

with ConMan('the-context-manager'):
    foo()
(Disclaimer: Don't do this)
If you will accept a hacky solution, I bring you one inspired by this.
Have the context manager edit the local namespace.
class Context(object):
    def __init__(self, locals_reference):
        self.prev_context = locals_reference.get('context', None)
        self.locals_reference = locals_reference

    def __enter__(self):
        self.locals_reference['context'] = self

    def __exit__(self, exception_type, exception_value, traceback):
        if self.prev_context is not None:
            self.locals_reference['context'] = self.prev_context
        else:
            del self.locals_reference['context']
You can then get the context with the context variable
with Context(locals()):
    print(context)
This implementation also works on nested contexts
with Context(locals()):
    c_context = context
    with Context(locals()):
        print(c_context == context)
    print(c_context == context)
However this is implementation specific, as the return value of locals may be a copy of the namespace. Tested on CPython 3.10.
Edit:
The implementation above will not work in functions from other modules (I wonder why), so here is a function that fetches the context:
import sys

def get_current_context(cls) -> "Context | None":
    try:
        if context is not None:
            return context
    except NameError:
        pass
    i = 0
    while True:
        try:
            c = sys._getframe(i).f_locals.get('context', None)
        except ValueError:
            return None
        if c is not None:
            return c
        i += 1
I would make it a classmethod of the context manager class.
Many years late, here's a straightforward way to do this in the second of the OP's 3 cases, where with ... as is used to bind the output of the context manager's __enter__ method to a variable. You can have the __enter__ method return the context manager itself, or its __exit__ method if that's all you're interested in.
class Manager:
    def __enter__(self): return self
    def __exit__(self, *args): print("Doing exit method stuff!")

with Manager() as manager:
    print("Doing some stuff before manually calling the exit method...")
    manager.__exit__()  # call the exit method
    print("Doing some more stuff before exiting for real...")
Of course, this would interfere with using with ... as to bind some other return-value from __enter__, but it would be straightforward to have __enter__ return a tuple consisting of its ordinary return value and the manager, just as you can make a function return multiple values.
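A small sketch of that tuple idea (TupleManager and its placeholder value are hypothetical, not from the original answer):

class TupleManager:
    def __enter__(self):
        result = "some useful value"   # placeholder for the normal return value
        return result, self            # hand back both the value and the manager

    def __exit__(self, *args):
        print("Doing exit method stuff!")

with TupleManager() as (result, manager):
    print(result)
    manager.__exit__()  # the manager itself is still in reach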
As the OP noted, it's also straightforward to call the __exit__ method in the first case, where the context manager had already been assigned to a variable beforehand. So the only really tricky case is the third one where the context manager is simply created via with Manager(): but is never assigned to a variable. My advice would be: if you're going to want to refer to the manager (or its methods) later, then either (1) assign it a name beforehand, (2) have its __enter__ method return a reference to it for with ... as to capture as I did above, but (3) DO NOT create it without storing any reference to it!
I am trying to understand the with statement. I understand that it is supposed to replace the try/except block.
Now suppose I do something like this:
try:
    name = "rubicon" / 2  # to raise an exception
except Exception as e:
    print("No, not possible.")
finally:
    print("OK, I caught you.")
How do I replace this with a context manager?
with doesn't really replace try/except, but, rather, try/finally. Still, you can make a context manager do something different in exception cases from non-exception ones:
class Mgr(object):
    def __enter__(self): pass
    def __exit__(self, ext, exv, trb):
        if ext is not None: print("no not possible")
        print("OK I caught you")
        return True

with Mgr():
    name = 'rubicon' / 2  # to raise an exception
The return True part is where the context manager decides to suppress the exception (as you do by not re-raising it in your except clause).
The contextlib.contextmanager function decorator provides a handy way of providing a context manager without the need to write a full-fledged ContextManager class of your own (with __enter__ and __exit__ methods, so you don't have to remember the arguments to the __exit__ method, or that the __exit__ method must return True in order to suppress the exception). Instead, you write a function with a single yield at the point you want the with block to run, and you trap any exceptions (that effectively come from the yield) as you normally would.
from contextlib import contextmanager

@contextmanager
def handler():
    # Put here what would ordinarily go in the `__enter__` method
    # In this case, there's nothing to do
    try:
        yield  # You can return something if you want, that gets picked up in the 'as'
    except Exception as e:
        print("no not possible")
    finally:
        print("Ok I caught you")

with handler():
    name = 'rubicon' / 2  # to raise an exception
Why go to the extra trouble of writing a context manager? Code re-use. You can use the same context manager in multiple places, without having to duplicate the exception handling. If the exception handling is unique to that situation, then don't bother with a context manager. But if the same pattern crops up again and again (or if it might for your users, e.g., closing a file, unlocking a mutex), it's worth the extra trouble. It's also a neat pattern to use if the exception handling is a bit complicated, as it separates the exception handling from the main line of code flow.
The with statement in Python is intended for wrapping a set of statements where you set up and then destroy or close resources. In that regard it is similar to try...finally, since the finally clause is executed even after an exception.
A context manager is an object that implements two methods: __enter__ and __exit__. Those are called immediately before and after (respectively) the with block.
For instance, take a look at the classic open() example:
with open('temp.txt', 'w') as f:
    f.write("Hi!")
open() returns a file object that implements __enter__ more or less like return self and __exit__ more or less like self.close().
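Roughly speaking, that behaviour corresponds to this sketch (a simplification for illustration, not the actual implementation of open()):

class FakeFile:
    """Simplified stand-in for the object returned by open()."""
    def __init__(self, path, mode):
        self._f = open(path, mode)

    def write(self, text):
        self._f.write(text)

    def close(self):
        self._f.close()

    def __enter__(self):
        return self    # like the real file object: hand back self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()   # like the real file object: always close

with FakeFile('temp.txt', 'w') as f:
    f.write("Hi!")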
The Components of a Context Manager
You should implement an __enter__ method that returns an object
Implement a __exit__ method.
Example
I will give a simple example to show why we need a context manager. During winter in Xinjiang, China, you should close the door immediately after you open it; if you forget to close it, you will get cold.
class Door:
    def __init__(self):
        self.doorstatus = 'the door was closed when you are not at home'
        print(self.doorstatus)

    def __enter__(self):
        print('I have opened the door')
        return self

    def __exit__(self, *args):
        print('pong!the door has closed')

    def fetchsomethings(self):
        print('I have fetched somethings')
When fetching things at home, you should open the door, fetch the things, and close the door.
with Door() as dr:
    dr.fetchsomethings()
the output is:
the door was closed when you are not at home
I have opened the door
I have fetched somethings
pong!the door has closed
Explanation
When the with statement creates the Door instance, __init__ runs first and prints "the door was closed when you are not at home". The with statement then calls __enter__, which prints "I have opened the door" and returns the instance, bound to dr. When dr.fetchsomethings() is called in the with block, the method prints "I have fetched somethings". When the block is finished, the context manager invokes the __exit__ method, which prints "pong!the door has closed". When you do not use the with keyword, __enter__ and __exit__ are not invoked.
with statements, or context managers, are there to aid with resources (although they may be used for much more).
Let's say you opened a file for writing:
f = open(path, "w")
You now have an open file handle. On some platforms, no other program can write to the file while you hold the handle open. To let other programs write to it, you must close the file handle:
f.close()
But suppose an error occurred before you closed your file:
f = open(path, "w")
data = 3/0 # Tried dividing by zero. Raised ZeroDivisionError
f.write(data)
f.close()
What happens now is that the function, or the entire program, exits, leaving your file with an open handle. (CPython closes leftover handles on termination, so they are freed together with the program, but you shouldn't count on that.)
A with statement ensures that, as soon as you leave its indentation, it will close the file handle:
with open(path, "w") as f:
data = 3/0 # Tried dividing by zero. Raised ZeroDivisionError
f.write(data)
# In here the file is already closed automatically, no matter what happened.
with statements may be used for many more things. For example: threading.Lock()
import threading

lock = threading.Lock()
with lock:  # Lock is acquired
    ...  # do stuff
# Lock is automatically released.
Almost everything done with a context manager can be done with try: ... finally: ... but context managers are nicer to use, more comfortable, more readable and by implementing __enter__ and __exit__ provide an easy to use interface.
Creating context managers is done by implementing __enter__() and __exit__() in a normal class.
__enter__() tells what to do when a context manager starts and __exit__() what to do when a context manager exits (giving the exception to the __exit__() method if an exception occurred).
A shortcut for creating context managers can be found in contextlib. It wraps a generator as a context manager.
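A minimal sketch of that shortcut (the dictionary here is just a stand-in for a real resource):

from contextlib import contextmanager

@contextmanager
def managed_resource():
    resource = {"connected": True}       # stand-in for real setup work
    try:
        yield resource                   # the with block runs here
    finally:
        resource["connected"] = False    # stand-in for real cleanup; runs even on error

with managed_resource() as r:
    print(r["connected"])  # True inside the block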
Managing Resources: In any programming language, the usage of resources like file operations or database connections is very common. But these resources are limited in supply. Therefore, the main problem lies in making sure to release these resources after usage. If they are not released, it will lead to resource leakage and may cause the system to either slow down or crash. It would be very helpful if users had a mechanism for the automatic setup and teardown of resources. In Python, this can be achieved by using context managers, which facilitate the proper handling of resources. The most common way of performing file operations is by using the with keyword, as shown below:
# Python program showing a use of with keyword
with open("test.txt") as f:
    data = f.read()
When a file is opened, a file descriptor is consumed which is a limited resource. Only a certain number of files can be opened by a process at a time. The following program demonstrates it.
file_descriptors = []
for x in range(100000):
    file_descriptors.append(open('test.txt', 'w'))
This leads to the error: OSError: [Errno 24] Too many open files: 'test.txt'
Python provides an easy way to manage resources: Context Managers. The with keyword is used. When it gets evaluated it should result in an object that performs context management. Context managers can be written using classes or functions(with decorators).
Creating a Context Manager: When creating context managers using classes, the user needs to ensure that the class has the methods __enter__() and __exit__(). The __enter__() returns the resource that needs to be managed, and the __exit__() does not return anything but performs the cleanup operations. First, let us create a simple class called ContextManager to understand the basic structure of creating context managers using classes, as shown below:
# Python program creating a context manager
class ContextManager():
    def __init__(self):
        print('init method called')

    def __enter__(self):
        print('enter method called')
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        print('exit method called')


with ContextManager() as manager:
    print('with statement block')
Output:
init method called
enter method called
with statement block
exit method called
In this case, a ContextManager object is created and assigned to the variable after the as keyword, i.e. manager. On running the above program, the following are executed in sequence:
__init__()
__enter__()
statement body (code inside the with block)
__exit__() [the parameters of this method are used to handle exceptions; a small sketch of this follows below]
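As a small sketch of that last point (a variation on the example above, not part of the original), this is what __exit__ receives when an exception is raised inside the block:

class ContextManager():
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        print('exit method called with:', exc_type, exc_value)
        return True  # returning True suppresses the exception

with ContextManager() as manager:
    raise ValueError('something went wrong inside the block')
# prints: exit method called with: <class 'ValueError'> something went wrong inside the block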
Suppose there is a program with a couple of objects living in it at runtime.
Is the __del__ method of each object called when the programs ends?
If yes I could for example do something like this:
class Client:
    def __del__(self):
        self.disconnect_from_server()
There are many potential difficulties associated with using __del__.
Usually, it is not necessary, or the best idea to define it yourself.
Instead, if you want an object that cleans up after itself upon exit or an exception, use a context manager:
per Carl's comment:
class Client:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.disconnect_from_server()


with Client() as c:
    ...
original answer:
import contextlib

class Client:
    ...

@contextlib.contextmanager
def make_client():
    c = Client()
    yield c
    c.disconnect_from_server()

with make_client() as c:
    ...
I second the general idea of using context managers and the with statement instead of relying on __del__ (for much the same reasons one prefers try/finally to finalizer methods in Java, plus one: in Python, the presence of __del__ methods can make cyclic garbage uncollectable).
However, given that the goal is to have "an object that cleans up after itself upon exit or an exception", the implementation by @~unutbu is not correct:
@contextlib.contextmanager
def make_client():
    c = Client()
    yield c
    c.disconnect_from_server()

with make_client() as c:
    ...
If an exception is raised in the ... part, disconnect_from_server does not get called (since the exception propagates through make_client, being uncaught there, and therefore terminates it while it's waiting at the yield).
The fix is simple:
@contextlib.contextmanager
def make_client():
    c = Client()
    try: yield c
    finally: c.disconnect_from_server()
Essentially, the with statement lets you almost forget about the good old try/finally statement... except when you're writing context managers with contextlib, and then it's really important to remember it!-)
Consider using with-statement to make cleanup explicit.
With circular references, __del__ is historically not called (this is the behaviour of Python 2 and of Python 3 before 3.4; since PEP 442 such cycles can be collected and __del__ does run):
class Foo:
    def __del__(self):
        self.p = None
        print "deleting foo"

a = Foo()
b = Foo()
a.p = b
b.p = a
prints nothing on those older versions (on Python 3.4+, the cycle is eventually collected and both __del__ calls run).
Yes, the Python interpreter tidies up at shutdown and, in practice, calls the __del__ method of most remaining objects, but this is not guaranteed for objects that still exist when the interpreter exits (and, historically, objects that are part of a reference cycle were skipped).
Although, as others have pointed out, __del__ methods are very fragile and should be used with caution.