Please help me clarify the difference in functionality between these two Python statements:
sys.exit(0)
os._exit(0)
According to the documentation:
os._exit():
Exit the process with status n, without calling cleanup handlers, flushing stdio buffers, etc.
Note The standard way to exit is sys.exit(n). _exit() should normally only be used in the child process after a fork().
os._exit calls the C function _exit(), which performs immediate program termination. Note the statement in its documentation that it "can never return".
sys.exit() is identical to raise SystemExit(). It raises a Python
exception which may be caught by the caller.
Original post: http://bytes.com/topic/python/answers/156121-os-_exit-vs-sys-exit
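To make the difference concrete, here is a minimal sketch (plain CPython, nothing assumed beyond the standard library) showing that sys.exit() is just a raised SystemExit that callers can intercept, whereas os._exit() would end the process on the spot:

import sys

try:
    sys.exit(0)                      # raises SystemExit(0)
except SystemExit as e:
    print('caught SystemExit, code =', e.code)

# os._exit(0), by contrast, would terminate the process right here:
# no exception is raised, so no except/finally/atexit code gets a chance to run.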
Excerpt from the book "The Linux Programming Interface":
Programs generally don’t call _exit() directly, but instead call the exit() library function,
which performs various actions before calling _exit().
Exit handlers (functions registered with atexit() and on_exit()) are called, in reverse order of their registration.
The stdio stream buffers are flushed.
The _exit() system call is invoked, using the value supplied in status.
Could someone expand on why _exit() should normally only be used in the child process after a fork()?
Instead of calling exit(), the child can call _exit(), so that it doesn’t flush stdio
buffers. This technique exemplifies a more general principle: in an application
that creates child processes, typically only one of the processes (most often the
parent) should terminate via exit(), while the other processes should terminate
via _exit(). This ensures that only one process calls exit handlers and flushes
stdio buffers, which is usually desirable.
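As a hedged sketch of that principle in Python (POSIX only, since os.fork() does not exist on Windows), the child finishes its work and leaves via os._exit(), while the parent reaps it and exits normally:

import os
import sys

pid = os.fork()
if pid == 0:
    # Child: do the work, flush anything it produced itself,
    # then leave via _exit() so inherited exit handlers and
    # stdio buffers are not run/flushed a second time.
    print('child %d did its work' % os.getpid())
    sys.stdout.flush()
    os._exit(0)
else:
    # Parent: the only process that terminates via a normal exit.
    os.waitpid(pid, 0)
    sys.exit(0)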
Related
According to Python documentation:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
I know that in older versions of Python cyclic references were one example of this behaviour; however, as I understand it, in Python 3 such cycles are successfully destroyed on interpreter exit.
I'm wondering what the cases are (as close to an exhaustive list as possible) in which the interpreter would not destroy an object upon exit.
All examples are implementation details - Python does not promise whether or not it will call __del__ for any particular object on interpreter exit. That said, one of the simplest examples is with daemon threads:
import threading
import time

def target():
    time.sleep(1000)

class HasADel:
    def __del__(self):
        print('del')

x = HasADel()
threading.Thread(target=target, daemon=True).start()
Here, the daemon thread prevents the HasADel instance from being garbage collected on interpreter shutdown. The daemon thread never actually touches that object, but x is reachable from references the daemon thread owns (the thread holds the target function, and target's module globals include x), and Python can't clean those references up while the thread is still running.
When the interpreter exits normally (for example when the program ends or sys.exit is called), not all objects are guaranteed to be destroyed. There is presumably some logic to which ones are, but it is not simple logic. After all, the __del__ method is meant for freeing memory resources, not other resources (like network connections) - that's what __enter__ and __exit__ are for.
Having said that, there are situations in which __del__ will most certainly not be called. The parallel to this is atexit functions; they are usually run at exit. However:
Note: The functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called.
atexit documentation
So, there are situations in which clean-up functions, like __del__, __exit__, and functions registered with atexit will not be called:
The program is killed by a signal not handled by Python - if a program receives a signal to stop, like SIGTERM or SIGQUIT, and it doesn't handle that signal, then it is simply stopped.
A Python fatal interpreter error occurs.
os._exit() is called - the documentation says:
Exit the process with status n, without calling cleanup handlers, flushing stdio buffers, etc.
So it is pretty clear that __del__ should not be called.
In conclusion, the interpreter does not guarantee __del__ being called, but there are situations in which it will definitely not be called.
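To illustrate the os._exit() case above, a minimal demonstration: with a normal exit the 'del' line is usually printed, but here nothing is.

import os

class HasADel:
    def __del__(self):
        print('del')

x = HasADel()
os._exit(0)   # process ends here: no __del__, no atexit handlers, no buffer flushing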
After comparing the quoted sentence from the documentation with your title, I think you have misunderstood what __del__ is and what it does.
You used the word "destroyed", and the documentation says __del__ may not get called in some situations... The thing is, all objects do get deleted once the interpreter's process finishes. __del__ is not a destructor and has nothing to do with the destruction of objects. Even if a memory leak occurs in a process, operating systems (at least the ones I know: Linux, Windows, ...) will eventually reclaim that memory after the process finishes. So everything is destroyed/deleted in the end!
In normal cases, when these objects are about to be destroyed, __del__ (better known as a finalizer) gets called as the very last step of destruction. In the other cases mentioned by the other answers, it doesn't get called.
That's why people say not to count on the __del__ method for cleaning up vital resources, and to use a context manager instead. In some scenarios, __del__ may even revive the object by passing a reference to it around.
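For example, a context manager makes the cleanup explicit and deterministic instead of leaving it to a finalizer (the Resource class here is just a hypothetical stand-in):

class Resource:
    def __enter__(self):
        print('acquire')
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs when the with-block ends, even on an exception, and does not
        # depend on interpreter-shutdown behaviour the way __del__ does.
        print('release')

with Resource():
    print('working with the resource')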
When running my code I start a thread that runs for around 50 seconds and does a lot of background work. If I run the program and then close it soon after, that work carries on in the background for a while because the thread never dies. How can I kill the thread gracefully in the closeEvent method of my MainWindow class? I've tried setting up a method called exit(), creating a signal 'quitOperation' in the thread in question, and then tried to use
myThread.quitOperation.emit()
I expected that this would call my exit() function in my thread because I have this line in my constructor:
self.quitOperation.connect(self.exit)
However, when I use the first line it breaks, saying that 'myThread' has no attribute 'quitOperation'. Why is this? Is there a better way?
I'm not sure about the Python side, but I assume myThread.quitOperation.emit() emits a signal asking the thread to exit. The point is that while your worker is using the thread and neither returns nor runs QCoreApplication::processEvents(), myThread never gets a chance to actually process your request (this is called thread starvation).
The correct answer may depend on the situation, and on the nature of the "stuff" your thread is doing. The most common practice is that the main thread sends a signal to the worker thread, where a slot sets a flag. In the blocking process you regularly check this flag; if it is set, you stop whatever "stuff" you are doing, tell your worker thread that it can quit (with a signal, preferably with a queued connection), call deleteLater() on the worker object itself, and return from any functions you are currently in, so that the thread's event handler can run, clean up your worker object and itself, and finally quit.
In case your "stuff" is a huge cycle of very fast operation like simple mathematics or directory navigation one-by-one that takes only a few milliseconds each, this will be enough.
In case your "stuff" contain huge blocking parts that you have no control of (an thus you can't place this flag checking call in it), you may need to wait in the main thread until the worker thread quits.
In case you use a direct connection to set the flag, or you set it directly, be sure to protect read/write access to the flag with a QMutex to prevent inconsistent reads, or use a queued connection to ensure single-threaded access to the flag.
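The same flag idea can be sketched in plain Python (not Qt-specific): threading.Event is already safe to set from another thread, so it plays the role of the mutex-protected flag described above.

import threading
import time

stop_requested = threading.Event()          # the "please stop" flag

def worker():
    for step in range(1000):
        if stop_requested.is_set():         # check the flag between small work units
            print('worker stopping cleanly at step', step)
            return
        time.sleep(0.05)                    # stand-in for one small unit of "stuff"

t = threading.Thread(target=worker)
t.start()
time.sleep(0.2)
stop_requested.set()                        # main thread asks the worker to stop
t.join()                                    # and waits until it actually has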
While highly discouraged, you can optionally use QThread's terminate() method to kill the thread instantly. You should never do this, as it may cause memory leaks, heap corruption, resource leaks and other nasty problems, since destructors and clean-up code will not run and execution can be halted in an undesired state.
I have read most of the similar questions on Stack Overflow, but none seems to solve my problem. I use ctypes to call a function from a DLL file, so I can't edit the source code of the DLL to add any "end looping" conditions. Also, this function may run for a long time (like a printing command). I need to design a "halt" command in case something urgent happens while printing is in progress. The only way I can see is to kill the thread.
It is never good to forcibly kill a thread. Your program should be designed to cleanly exit from threads.
You can mark it as "daemon" before starting it. If you exit the main thread it will not wait on daemonized threads.
Terminating a thread can still be done, in two ways. You can asynchronously raise a Python exception in a thread, via https://docs.python.org/2/c-api/init.html#c.PyThreadState_SetAsyncExc (as stated there, this requires building a C module or using ctypes to make it work). The other approach, on Windows, is to call the Windows API TerminateThread():
TerminateThread is used to cause a thread to exit. When this occurs,
the target thread has no chance to execute any user-mode code. DLLs
attached to the thread are not notified that the thread is
terminating. The system frees the thread's initial stack.
[...]
TerminateThread is a dangerous function that should only be used in
the most extreme cases. You should call TerminateThread only if you
know exactly what the target thread is doing, and you control all of
the code that the target thread could possibly be running at the time
of the termination. For example, TerminateThread can result in the
following problems: ...
I think this should also be doable using ctypes.
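For completeness, a hedged sketch of the ctypes route. PyThreadState_SetAsyncExc only takes effect when the target thread next executes Python bytecode, so it cannot interrupt a thread blocked inside a C call (such as the DLL function described above), and it carries all the usual risks of killing a thread asynchronously:

import ctypes
import threading

def async_raise(thread: threading.Thread, exctype=SystemExit):
    """Ask the interpreter to raise exctype inside the given thread."""
    tid = ctypes.c_ulong(thread.ident)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exctype))
    if res == 0:
        raise ValueError('invalid thread id')
    if res > 1:
        # More than one thread state was affected: undo the request.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
        raise SystemError('PyThreadState_SetAsyncExc failed')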
You cannot safely terminate a thread without its cooperation. Threads are not isolated within a process, so unsafely terminating a thread contaminates the process. Please, don't go down this road.
If you need this kind of isolation, you need a process. You can safely terminate a process without its cooperation, though it may leave system objects (such as files) that the process was working on in an intermediate state. In your case, that may mean a print job half-done and a page halfway in the printer. Or it may mean temporary files that don't get removed.
Which is the correct function to call to exit the child process after os.fork()?
The documentation for os._exit() states:
The standard way to exit is sys.exit(n).
_exit() should normally only be used in the child process after a fork().
It does not say whether it's acceptable to terminate the child process using sys.exit(). So:
Is it?
Are there any potential side effects of doing so?
The Unix way is that if you are the child of a fork, you call _exit(). The main difference between exit and _exit is that exit tidies up more - it calls the atexit handlers, flushes stdio, and so on - whereas _exit does the minimum amount of work in userspace, just getting the kernel to close all its files etc.
This translates pretty directly into the Python world: sys.exit does what exit does plus more of the Python interpreter shutdown, whereas os._exit does the minimum possible.
If you are the child of a fork and you call exit rather than _exit, you may end up running exit handlers that the parent will run again when it exits, causing undefined behaviour.
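A small POSIX-only sketch of the duplicated-flush side of this: the text buffered before the fork sits in both processes' stdio buffers, so if the child leaves via sys.exit() it can end up being written twice.

import os
import sys

print('buffered before fork ', end='')   # no newline/flush: stays in the stdio buffer

pid = os.fork()
if pid == 0:
    sys.exit(0)     # child flushes its copy of the buffer too -> text may appear twice
    # os._exit(0)   # with _exit() the child's copy is simply discarded
else:
    os.waitpid(pid, 0)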
Part of the documentation on os._exit(n) you did not cite is
Exit the process with status n, without calling cleanup handlers, flushing stdio buffers, etc.
So, the way I read this, you should use os._exit() whenever the child shares file handles with the parent (so they are close()'d by the other process), and take care of flushing the buffers yourself if that matters in your case. Without shared resources (like open files) it doesn't matter.
So if your child processes are computation-only and are fed raw data (not resource handles), it's safe to use exit().
I have a Python CGI script that accepts user uploads (via sys.stdin.read).
After receiving the file (whether successfully or not), the script needs to do some cleanup. This works fine when the upload finishes correctly; however, if the user closes the client, the CGI script is silently killed on the server, and as a result no cleanup code gets executed. How can I force the script to always finish?
You can trap the exit signal with the signal module. I haven't tried this with mod_python, though.
http://docs.python.org/library/signal.html
Note in the docs:
When a signal arrives during an I/O operation, it is possible that the I/O operation raises an exception after the signal handler returns. This is dependent on the underlying Unix system’s semantics regarding interrupted system calls.
You may need to catch I/O exceptions for the broken pipe and/or file write if you don't call sys.exit from your handler.
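A rough sketch of what that could look like; which signal the web server actually delivers on an aborted upload depends on the setup, and cleanup() here is just a placeholder for your own code:

import signal
import sys

def cleanup():
    pass   # placeholder: remove temp files, roll back the partial upload, etc.

def on_signal(signum, frame):
    cleanup()
    sys.exit(1)      # raises SystemExit, so finally blocks still get to run

signal.signal(signal.SIGTERM, on_signal)
signal.signal(signal.SIGPIPE, on_signal)   # Unix only; typical when the client goes away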
The script is probably not killed silently; you just don't see the exception that Python throws. I suggest wrapping the whole script in try/except and writing any exception to a log file.
This way, you can see what really happens. The logging module is your friend.
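Something along these lines, where handle_upload() is only a stand-in for whatever the script really does:

import logging

logging.basicConfig(filename='/tmp/upload-debug.log', level=logging.DEBUG)

def handle_upload():
    raise RuntimeError('simulated failure')   # stand-in for the real upload + cleanup code

try:
    handle_upload()
except Exception:
    logging.exception('upload script died')   # writes the full traceback to the log
    raise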
You may be able to use the atexit module.
http://docs.python.org/library/atexit.html
From the documentation:
The atexit module defines a single function to register cleanup functions. Functions thus registered are automatically executed upon normal interpreter termination.
Note: the functions registered via this module are not called when the program is killed by a signal, when a Python fatal internal error is detected, or when os._exit() is called.
This is an alternate interface to the functionality provided by the sys.exitfunc variable.
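A minimal usage sketch (the cleanup function itself is hypothetical):

import atexit

def cleanup():
    print('cleaning up temporary upload data')   # placeholder cleanup work

atexit.register(cleanup)
# cleanup() runs on normal termination (end of script, sys.exit(), even an
# unhandled exception), but not after os._exit() or an unhandled fatal signal.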