I use __init__() a lot in Python classes to set things up when an object is first created.
Is there an equivalent function that is called when a script is shutting down?
There is the __del__ method, which is called when an object is finalized. However, Python doesn't guarantee that __del__ will actually be called on objects when the interpreter exits.
There are a few alternatives:
atexit.register -- Here you can register a function to run when your script terminates
create a context manager and use the with statement. Then your context manager's __exit__ method will be called unconditionally when you leave the context.
Both of these options would fail if you did something really nasty to exit your program (e.g. somehow causing a segmentation fault, or exiting via os._exit). A sketch of both approaches follows.
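For example, a minimal sketch of both approaches (the function and class names here are placeholders, not part of any real API):

import atexit

def cleanup():
    # Runs when the interpreter exits normally
    print("shutting down")

atexit.register(cleanup)

class Resource:
    def __enter__(self):
        print("setting up")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Called no matter how the with block is left
        print("tearing down")

with Resource() as r:
    pass  # do work with r here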
According to Python documentation:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
I know that in older versions of Python cyclic references were one example of this behaviour; however, as I understand it, in Python 3 such cycles are successfully destroyed upon interpreter exit.
I'm wondering what are the cases (as close to exhaustive list as possible) when the interpreter would not destroy an object upon exit.
All examples are implementation details - Python does not promise whether or not it will call __del__ for any particular object on interpreter exit. That said, one of the simplest examples involves daemon threads:
import threading
import time

def target():
    time.sleep(1000)

class HasADel:
    def __del__(self):
        print('del')

x = HasADel()
threading.Thread(target=target, daemon=True).start()
Here, the daemon thread prevents the HasADel instance from being garbage collected on interpreter shutdown. The daemon thread doesn't actually do anything with that object, but Python can't clean up references the daemon thread owns, and x is reachable from those references.
When the interpreter exits normally, such as when the program ends or sys.exit() is called, not all objects are guaranteed to be destroyed. There is probably some logic to this, but not very simple logic. After all, the __del__ method is for freeing memory resources, not other resources (like network connections); that's what __enter__ and __exit__ are for.
Having said that, there are situations in which __del__ will most certainly not be called. The parallel here is atexit functions: they are usually run at exit. However:
Note: The functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called.
atexit documentation
So, there are situations in which clean-up functions, like __del__, __exit__, and functions registered with atexit will not be called:
The program is killed by a signal not handled by Python - if a program receives a signal telling it to stop, like SIGINT or SIGQUIT, and it doesn't handle the signal, then it will be stopped.
A Python fatal interpreter error occurs.
os._exit() is called - the documentation says:
Exit the process with status n, without calling cleanup handlers, flushing stdio buffers, etc.
So it is pretty clear that __del__ should not be called.
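As a quick illustration of that last point, this sketch never prints its cleanup message, because os._exit() bypasses the atexit machinery entirely:

import atexit
import os

atexit.register(lambda: print("cleanup"))  # registered, but never runs
os._exit(0)  # exits immediately, skipping all cleanup handlers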
In conclusion, the interpreter does not guarantee __del__ being called, but there are situations in which it will definitely not be called.
After comparing the quoted sentence from the documentation with your title, I think you may have misunderstood what __del__ is and what it does.
You used the word "destroyed", and the documentation says __del__ may not get called in some situations... The thing is, all objects get deleted after the interpreter's process finishes. __del__ is not a destructor and has nothing to do with the destruction of objects. Even if a memory leak occurs in a process, operating systems (the ones I know, at least: Linux, Windows, ...) will eventually reclaim that memory after the process finishes. So everything is destroyed/deleted!
In normal cases, when these objects are about to be destroyed, __del__ (better known as the finalizer) gets called as the very last step of destruction. In the other cases mentioned by other answers, it doesn't get called.
That's why people say not to count on the __del__ method for cleaning up vital things, and to use a context manager instead. In some scenarios, __del__ may even revive the object by storing a new reference to it somewhere.
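To illustrate that last point, here is a contrived sketch of an object reviving itself in __del__ (the names are made up for the example):

zombies = []

class Phoenix:
    def __del__(self):
        # Storing a new reference "revives" the object,
        # so it is not actually destroyed here.
        zombies.append(self)

p = Phoenix()
del p           # __del__ runs, but the object survives
print(zombies)  # the instance is still alive in the list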
Suppose I have an item that interfaces with a C library. The interface allocates memory.
The __del__ method takes care of everything, but there is no guarantee that __del__ will actually be called by the Python 3 runtime.
So I have implemented the context manager methods and can use my item in a 'with' statement:
with Foo(**kwargs) as foo:
    foo.doSomething()
    # no memory leaks
However, I am now exposing my foo in an __init__.py, and am curious how I could expose the context manager object in a way that lets a user use it without wrapping it in a 'with' block.
Is there a way I can open/construct my Foo inside my module such that __del__ (or the context manager's exit function) is guaranteed to be called, so that the object is exposed for use without a daemon or other long-running process risking a memory leak?
Or is deletion implied when an object is constructed implicitly via import, even though it ~may or may not~ occur when the object is constructed in the runtime scope?
Although this should probably not be relied on, or at least might not hold from version to version...
...Python 3.9 does indeed call __del__ on objects initialized during import.
I can't accept this as a definitive answer, because I have no way of proving that Python 3 will always call __del__; it just hasn't failed to call it so far, whereas I get a memory leak every time I don't dispose of the object properly after creating it during the runtime flow.
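One hedged way to get deterministic cleanup for a module-level instance, without requiring users to write a with block, is to enter the context at import time and register the teardown with atexit (Foo and its methods below are placeholders for the real interface, and, per the atexit documentation quoted above, this still won't run on os._exit() or an unhandled fatal signal):

# __init__.py (sketch)
import atexit

class Foo:
    def __enter__(self):
        # allocate the C-library memory here
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # free the C-library memory here
        pass

foo = Foo().__enter__()  # constructed once, at import time
atexit.register(foo.__exit__, None, None, None)  # torn down at normal exit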
I am a bit of a Python newbie, but I am implementing a benchmarking tool in Python that will, for example, create several sets of resources which depend on each other. When the program goes out of scope, I want to clean up the resources in the correct order.
I'm from a C++ background, in C++ I know I can do this with RAII (constructors, destructors).
What is an equivalent pattern in Python for this problem? Is there a way to do RAII in Python, or is there a better way to solve it?
You are probably looking for a context manager, which is an object that can be used in a with statement:
with context() as c:
    do_something(c)
When the with statement is entered, the expression (in this case, context()) will be evaluated, and should return a context manager. __enter__() will be called on the context manager, and the result (which may or may not be the same object as the context manager) is assigned to the variable specified with as. No matter how control is exiting the with body, __exit__() will be called on the context manager, with arguments that specify whether an exception was thrown or not.
As an example: the builtin open() should be used in this way in order to close the opened file after interacting with it.
A new context manager type can easily be defined with contextlib.
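For instance, a sketch using contextlib.contextmanager (the resource name and print calls are stand-ins for real setup and teardown code):

from contextlib import contextmanager

@contextmanager
def resource(name):
    print(f"acquire {name}")      # setup runs on entry to the with block
    try:
        yield name                # the with body runs here
    finally:
        print(f"release {name}")  # teardown runs even if the body raises

with resource("db") as r:
    print(f"using {r}")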
For a more one-off solution, you can use try/finally: the finally block is executed after the try block, no matter how control exits the try block:
try:
    do_something()
finally:
    cleanup()
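Since the question mentions several resources that depend on each other, contextlib.ExitStack may also fit: it releases registered resources in reverse (LIFO) order, much like C++ destructors. A minimal sketch with placeholder resources:

from contextlib import ExitStack, contextmanager

@contextmanager
def resource(name):
    print(f"create {name}")
    try:
        yield name
    finally:
        print(f"destroy {name}")

with ExitStack() as stack:
    a = stack.enter_context(resource("a"))
    b = stack.enter_context(resource("b, which depends on a"))
    # ... use a and b ...
# on exit: destroys b first, then a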
If I define a Python thread by extending the threading.Thread class and overriding run, I can then invoke run() instead of start() and execute it in the calling thread instead of a separate one.
i.e.
import threading

class MyThread(threading.Thread):
    def run(self):
        while condition():
            do_something()
This code (1) will execute the run method in a separate thread:
t = MyThread()
t.start()
This code (2) will execute the run method in the current thread:
t = MyThread()
t.run()
Are there any practical disadvantages to using this approach when writing code that can be executed either way? Could invoking the run method of a Thread object directly cause memory problems, performance issues, or other unpredictable behavior?
In other words, what are the differences (if any are notable; I guess some more memory will be allocated, but it should be negligible) between invoking the code in (2) on the MyThread class and on an otherwise identical class that extends object instead of threading.Thread?
I guess that some (if any) of the lower-level differences might depend on the interpreter. In case this is relevant, I'm mainly interested in CPython 3.*.
There will be no difference in the behavior of run whether you're using a threading.Thread object, an object of a threading.Thread subclass, or an object of any other class that has a run method:
threading.Thread.start starts a new thread and then runs run in that thread.
run starts the activity in the calling thread, be it the main thread or another one.
If you run run in the main thread, the whole thread will be busy executing the task run is supposed to execute, and you won't be able to do anything until the task finishes.
That said, no, there will be no notable differences as the run method behaves just like any other method and is executed in the calling thread.
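A quick way to see this for yourself (the class name is borrowed from the question; the prints just report which thread executes run):

import threading

class MyThread(threading.Thread):
    def run(self):
        print("run executing in:", threading.current_thread().name)

MyThread().start()  # prints a worker thread name, e.g. Thread-1
MyThread().run()    # prints MainThread: no new thread is created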
I looked into the code implementing the threading.Thread class in CPython 3. The __init__ method simply assigns some variables and does not do anything that seems related to actually creating a new thread. Therefore we can assume that it should be safe to use a threading.Thread object in the proposed manner.
I'm doing some processing in a __del__() destructor of a Python object that I don't want to happen if the program exits via an exception. Is there a way to check from __del__() whether I'm in the middle of a normal exit or exception unwinding?
Alternatively, is there some way to check for the same condition in an atexit function?
Do not use __del__ for this. The __del__ hook is not even guaranteed to be called:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
Instead, manage the cache with a context manager and only mark the cache as reusable when the __exit__() method is called with exc_type set to None.
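A hedged sketch of that pattern (the cache object and its reusable flag are placeholders for whatever state you are protecting):

class CacheGuard:
    def __init__(self, cache):
        self.cache = cache

    def __enter__(self):
        return self.cache

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            # Normal exit: safe to mark the cache as reusable
            self.cache["reusable"] = True
        # Returning None lets any exception propagate

cache = {"reusable": False}
with CacheGuard(cache):
    pass  # work with the cache here
print(cache["reusable"])  # True only if no exception occurred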