Check if a python program exited with an exception - python

I'm doing some processing in a __del__() destructor of a Python object that I don't want to happen if the program exited via exception. Is there a way to check from __del__() if I'm in the middle of normal exit or exception unwinding?
Alternatively, is there some way to check for the same condition in an atexit handler?

Do not use __del__ for this. The __del__ hook is not even guaranteed to be called:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
Instead, manage the cache with a context manager and only mark the cache as reusable when the __exit__() method is called with exc_type set to None.
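A minimal sketch of that approach, assuming a hypothetical Cache object with a mark_reusable() method (the names are illustrative, not from the question):

class CacheSession:
    def __init__(self, cache):
        self.cache = cache

    def __enter__(self):
        return self.cache

    def __exit__(self, exc_type, exc_value, traceback):
        # exc_type is None only when no exception escaped the with-block
        if exc_type is None:
            self.cache.mark_reusable()
        # returning None/False lets any exception propagate normally

Usage would look like:

with CacheSession(cache) as c:
    do_work(c)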

Related

Which objects are not destroyed upon Python interpreter exit?

According to the Python documentation:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
I know that in older versions of Python cyclic references would have been one example of this behaviour; however, as I understand it, in Python 3 such cycles are successfully destroyed upon interpreter exit.
I'm wondering what the cases are (as close to an exhaustive list as possible) in which the interpreter would not destroy an object upon exit.
All examples are implementation details - Python does not promise whether or not it will call __del__ for any particular objects on interpreter exit. That said, one of the simplest examples is with daemon threads:
import threading
import time

def target(x):
    time.sleep(1000)

class HasADel:
    def __del__(self):
        print('del')

x = HasADel()
threading.Thread(target=target, args=(x,), daemon=True).start()
Here, the daemon thread prevents the HasADel instance from being garbage collected on interpreter shutdown. The daemon thread doesn't actually do anything with that object, but Python can't clean up references the daemon thread owns, and x is reachable from references the daemon thread owns.
When the interpreter exits normally, for example when the program ends or sys.exit is called, not all objects are guaranteed to be destroyed. There is presumably some logic to which objects are destroyed, but it is not simple logic. After all, the __del__ method is for freeing memory resources, not other resources (like network connections) - that's what __enter__ and __exit__ are for.
Having said that, there are situations in which __del__ will most certainly not be called. The parallel to this is atexit functions; they are usually run at exit. However:
Note: The functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called.
atexit documentation
So, there are situations in which cleanup functions like __del__, __exit__, and functions registered with atexit will not be called:
The program is killed by a signal not handled by Python - if a program receives a signal such as SIGINT or SIGQUIT and does not handle it, it is stopped without any cleanup code running.
A Python fatal interpreter error occurs.
os._exit() is called - the documentation says:
Exit the process with status n, without calling cleanup handlers, flushing stdio buffers, etc.
So it is pretty clear that __del__ will not be called.
In conclusion, the interpreter does not guarantee __del__ being called, but there are situations in which it will definitely not be called.
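As a quick illustration of the os._exit() case (a sketch, not part of the original answer): register an atexit handler and exit with os._exit(), and the handler never runs; exiting normally or via sys.exit() would run it.

import atexit
import os

atexit.register(lambda: print('cleanup ran'))

# sys.exit(0) would raise SystemExit, run the atexit handler, and print 'cleanup ran'.
# os._exit(0) terminates the process immediately: nothing is printed.
os._exit(0)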
After comparing the quoted sentence from the documentation with your title, I think you have misunderstood what __del__ is and what it does.
You used the word "destroyed", and the documentation says __del__ may not get called in some situations... The thing is, "all" objects are deleted after the interpreter's process finishes. __del__ is not a destructor and has nothing to do with the destruction of objects. Even if a memory leak occurs in a process, operating systems (at least the ones I know: Linux, Windows, ...) will eventually reclaim that memory after the process finishes. So everything is destroyed/deleted!
In normal cases, when these objects are about to be destroyed, __del__ (better known as a finalizer) gets called in the very last step of destruction. In the other cases mentioned by other answers, it doesn't get called.
That's why people say not to count on the __del__ method for cleaning up vital resources and to use a context manager instead. In some scenarios, __del__ may even revive the object by passing a reference to it around.
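A small sketch of that last point (illustrative only): a __del__ that stashes a reference to self keeps the object alive, which is known as resurrection.

revived = []

class Phoenix:
    def __del__(self):
        # Storing a new reference "resurrects" the object; since Python 3.4
        # (PEP 442) __del__ will not be called for it a second time.
        revived.append(self)

p = Phoenix()
del p           # __del__ runs, but the object survives in `revived`
print(revived)  # [<__main__.Phoenix object at ...>]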

RAII in python - How to manage the lifetime of chain of resources

I am a bit of a Python newbie, but I am implementing a benchmarking tool in Python that will, for example, create several sets of resources which depend on each other. When the program goes out of scope, I want to clean up the resources in the correct order.
I'm from a C++ background, in C++ I know I can do this with RAII (constructors, destructors).
What is an equivalent pattern in Python for this problem? Is there a way to do RAII in Python, or is there a better way to solve this?
You are probably looking for a context manager, which is an object that can be used in a with statement:
with context() as c:
    do_something(c)
When the with statement is entered, the expression (in this case, context()) will be evaluated and should return a context manager. __enter__() will be called on the context manager, and the result (which may or may not be the same object as the context manager) is assigned to the variable specified with as. No matter how control exits the with body, __exit__() will be called on the context manager, with arguments that specify whether an exception was thrown or not.
As an example: the builtin open() should be used in this way in order to close the opened file after interacting with it.
A new context manager type can easily be defined with contextlib.
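For example, a generator-based context manager written with contextlib (a sketch; acquire_resource() and release_resource() are hypothetical stand-ins for your own setup and teardown code):

from contextlib import contextmanager

@contextmanager
def managed_resource():
    resource = acquire_resource()    # hypothetical setup
    try:
        yield resource               # bound by `as` in the with statement
    finally:
        release_resource(resource)   # hypothetical teardown, runs even on exceptions

with managed_resource() as r:
    do_something(r)

For a chain of dependent resources, contextlib.ExitStack can enter several such managers and will unwind them in reverse order of acquisition, which gives the RAII-style ordering you describe.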
For a more one-off solution, you can use try/finally: the finally block is executed after the try block, no matter how control exits the try block:
try:
    do_something()
finally:
    cleanup()

Python: Is there a shutdown equivalent to __init__()?

I use __init__() functions a lot in Python classes to set up things when a class is first called.
Is there an equivalent function that is called when a script is shutting down?
There is the __del__ method, which is called when an object is finalized. However, Python doesn't guarantee that __del__ will actually be called on objects when the interpreter exits.
There are a few alternatives:
atexit.register -- Here you can register a function to run when your script terminates
create a context manager and use the with statement. Then your context manager's __exit__ method will be called unconditionally when you leave the context.
Both of these options would fail if you did something really nasty to exit your program (e.g. somehow causing a segmentation fault or exiting via os._exit).
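A minimal sketch of the atexit option (the class and messages are illustrative):

import atexit

class Service:
    def __init__(self):
        print('starting up')
        # Run self.shutdown automatically on normal interpreter exit
        atexit.register(self.shutdown)

    def shutdown(self):
        print('shutting down')

svc = Service()
# When the script ends (or sys.exit is called), 'shutting down' is printed.
# It is not printed after os._exit() or a hard crash.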

What is difference between sys.exit(0) and os._exit(0)

Please help me clarify the difference in functionality between these two Python statements:
sys.exit(0)
os._exit(0)
According to the documentation:
os._exit():
Exit the process with status n, without calling cleanup handlers, flushing stdio buffers, etc.
Note: The standard way to exit is sys.exit(n). _exit() should normally only be used in the child process after a fork().
os._exit calls the C function _exit(), which does an immediate program termination. Note the statement "can never return".
sys.exit() is identical to raise SystemExit(). It raises a Python exception which may be caught by the caller.
Original post: http://bytes.com/topic/python/answers/156121-os-_exit-vs-sys-exit
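To see the difference in behaviour (a sketch): SystemExit can be caught like any other exception and normal cleanup still happens, whereas os._exit() bypasses all of it.

import sys

try:
    sys.exit(1)             # raises SystemExit(1)
except SystemExit as exc:
    print('caught exit with code', exc.code)

print('still running')

# By contrast, os._exit(1) at the top of the try block would terminate the
# process immediately: no exception to catch, no atexit handlers, no flushing.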
Excerpt from the book "The Linux Programming Interface":
Programs generally don't call _exit() directly, but instead call the exit() library function, which performs various actions before calling _exit():
Exit handlers (functions registered with atexit() and on_exit()) are called, in reverse order of their registration.
The stdio stream buffers are flushed.
The _exit() system call is invoked, using the value supplied in status.
Could someone expand on why _exit() should normally only be used in the child process after a fork()?
Instead of calling exit(), the child can call _exit(), so that it doesn't flush stdio buffers. This technique exemplifies a more general principle: in an application that creates child processes, typically only one of the processes (most often the parent) should terminate via exit(), while the other processes should terminate via _exit(). This ensures that only one process calls exit handlers and flushes stdio buffers, which is usually desirable.
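A sketch of that pattern in Python (POSIX only, since it relies on os.fork()):

import os
import sys

print('buffered output', end='')   # stays in the stdio buffer, not yet flushed

pid = os.fork()
if pid == 0:
    # Child: exit without flushing the inherited stdio buffer or running
    # exit handlers, so 'buffered output' is not written twice.
    os._exit(0)
else:
    # Parent: wait for the child, then exit normally; buffers are flushed
    # and cleanup handlers run exactly once.
    os.waitpid(pid, 0)
    sys.exit(0)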

Module-wide destructor in Python?

I am wondering if there is a module-wide destructor we could make use of to finalize things or call some specific shutdown functions in a module.
For example, some handlers of the logbook module are created and pushed onto the stack (e.g. handler1.push_application()), and it is better to pop those handlers when your program exits. It would be great to have some sort of automatic function call to do this, and a module-wide destructor is one possible candidate I can think of :)
The atexit module allows you to register cleanup functions that Python will perform on interpreter termination.
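For the logbook example above, that could look like the following sketch (assuming the handler was installed with push_application() and that pop_application() is the matching teardown call):

import atexit
import logbook

handler1 = logbook.FileHandler('app.log')
handler1.push_application()

# Pop the handler automatically when the interpreter exits normally
atexit.register(handler1.pop_application)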
