What is the workflow of processing a signal in Python? I set a signal handler; when the signal occurs, how does Python invoke my function? Does the OS invoke it directly, just like in a C program?
If I am in a C extension of Python, is it interrupted immediately?
Now it's clear to me how a Python process handles a signal. When you set a handler via the signal module, the module registers a C function signal_handler (see $src/Modules/signalmodule.c), which records your handler, flags it as pending (Handlers[sig_num].tripped = 1;), and then calls Py_AddPendingCall to notify the interpreter. The interpreter later invokes Py_MakePendingCalls, which calls PyErr_CheckSignals, which in turn runs your Python handler from the main evaluation loop (see $src/Python/ceval.c).
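A minimal sketch of that mechanism in action, assuming a Unix-like platform where SIGALRM and signal.alarm are available:

import signal
import time

def handler(signum, frame):
    print("Python-level handler ran for signal", signum)

signal.signal(signal.SIGALRM, handler)  # registers the C trampoline plus your Python callable
signal.alarm(1)                         # the kernel delivers SIGALRM in about one second

# The C-level handler only trips a flag; the Python handler runs once the
# byte-code loop regains control (time.sleep cooperates with the interpreter,
# so on Python 3.5+ the handler runs and the sleep then resumes).
time.sleep(2)
print("back in Python code")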
If you set a Python-code signal handler using the signal module, the interpreter will only run it when it re-enters the byte-code interpreter. The handler is not run right away; it is queued when the signal occurs. If the code path is currently in C code, whether built-in or in an extension module, the handler is deferred until the C code returns control to the Python byte-code interpreter. This can be a long time, and you can't really predict how long.
Most notably, if you are using interactive mode with readline enabled, your signal handler won't run until you give it some input to interpret. This is because the input code is in the readline library (C code) and doesn't return to the interpreter until it has a complete line.
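A rough illustration of that deferral, using a pathological regular expression as a stand-in for long-running C code (press Ctrl+C while it runs; the handler only fires once the match returns):

import re
import signal

def handler(signum, frame):
    print("SIGINT noticed by the Python handler")

signal.signal(signal.SIGINT, handler)

# Catastrophic backtracking keeps the C-level regex engine busy for a long
# time without returning to the byte-code loop, so Ctrl+C only sets a flag.
re.match(r'(a+)+$', 'a' * 28 + 'b')
print("match finished; the handler has now had a chance to run")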
Take a look at the signal module. If you send a signal to a Python script and there is a handler registered for it, that handler will be run, which gives you the ability to handle, or even ignore, certain signals. For example, instead of dying immediately on SIGTERM, you can attempt some shutdown cleanup work before exiting. (Note that SIGKILL cannot be caught or handled at all.)
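A small sketch of that idea, assuming a hypothetical cleanup() function:

import signal
import sys

def cleanup():
    # hypothetical: flush buffers, close files, remove temporary data, ...
    pass

def on_sigterm(signum, frame):
    cleanup()
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)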
Related
I have a program using TensorFlow on unsupported hardware, so every time I run it I get an "Illegal instruction (core dumped)" error.
My main goal is to capture this error; I don't want to solve it.
The error is not printed to the stderr of my program, it's printed to the stderr of bash.
Then my program exits with code 33792, which corresponds to exit status 132 (128 + SIGILL).
And I cannot capture it using the method mentioned here, because I'm running my command using docker run and I can't pass it the curly brackets.
Is there any way to capture the stdout of bash without the curly brackets?
Also, how exactly is SIGILL generated? What exactly is happening behind the scenes?
Is SIGILL triggered in the parent process (bash in my case) and passed to the child process (my program)? or vice versa?
I tried adding a SIGILL handler in my program to see if I could capture it, but my program froze instead of printing the "illegal instruction" error.
I'm using Debian 11 and my program is written in Python.
Edit:
The SIGILL kills my Python program, and my goal is to capture the SIGILL from inside my program, print some error, and then terminate my program.
I don't want the "Illegal instruction" error printed to bash's stderr; I want it printed to my program's stderr or stdout.
Edit: here's the SIGILL handler I have in my code:
import signal

def sigill_handler(sig, frame):
    print("Illegal Instruction. terminating.")

signal.signal(signal.SIGILL, sigill_handler)

Notice that this is the only signal I'm handling in my code.
Citing https://docs.python.org/3/library/signal.html:
Execution of Python signal handlers
A Python signal handler does not get executed inside the low-level (C) signal handler. Instead, the low-level signal handler sets a flag which tells the virtual machine to execute the corresponding Python signal handler at a later point (for example at the next bytecode instruction). This has consequences:
It makes little sense to catch synchronous errors like SIGFPE or SIGSEGV that are caused by an invalid operation in C code. Python will return from the signal handler to the C code, which is likely to raise the same signal again, causing Python to apparently hang. From Python 3.3 onwards, you can use the faulthandler module to report on synchronous errors.
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
If the handler raises an exception, it will be raised “out of thin air” in the main thread. See the note below for a discussion.
According to https://docs.python.org/3/library/faulthandler.html, all faulthandler can do is dump a stack trace, so it does not help with your requirement.
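Enabling it is a one-liner, shown here only for completeness; it reports the crash but cannot keep the process alive or redirect bash's message:

import faulthandler
import sys

# Dump a Python traceback to stderr when a fatal signal such as SIGILL,
# SIGSEGV, SIGFPE, SIGABRT or SIGBUS is received.
faulthandler.enable(file=sys.stderr, all_threads=True)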
What you could do is to run your possibly failing program from your own wrapper program where you can check the wait status and decide what you display to the user if the program was killed by SIGILL.
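A minimal sketch of such a wrapper, assuming the failing script is a hypothetical your_program.py run with the same interpreter:

import signal
import subprocess
import sys

proc = subprocess.run([sys.executable, "your_program.py"])

# On POSIX, a negative return code means the child was killed by that signal.
if proc.returncode == -signal.SIGILL:
    print("Illegal instruction: TensorFlow is not supported on this CPU.",
          file=sys.stderr)
    sys.exit(1)

sys.exit(proc.returncode)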
It would be better to check if your program runs on a supported platform before using any tensorflow functions.
Both Python and C allow users to install a signal handler. However, if a Python program calls C code, and that C code installs a C signal handler, then the Python program also installs a Python signal handler for the same signal, how will that signal be handled afterwards?
More specifically, what happens when users call signal.signal in Python? Does Python install, in addition to a Python signal handler, a C signal handler which will replace the old C signal handler? If so, where is the old C signal handler returned in the Python environment?
man sigaction says:
If oldact is non-NULL, the previous action is saved in oldact.
But Python signal.signal returns the old Python signal handler not the old C signal handler.
It looks like Python discards the old C handler. Python does install its own C handler here (Python source code); that handler manages the Python-level signaling.
PyOS_setsig does return the old C handler, but the linked line discards it. The Python implementation of signal.signal also returns a 'previous' handler, but it is only tracking an internal list (see the variable Handlers). It is unaware of any C handlers.
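A short sketch of what signal.signal hands back; only standard-library names are used here:

import signal

def py_handler(signum, frame):
    print("Python handler for signal", signum)

previous = signal.signal(signal.SIGINT, py_handler)

# 'previous' is the Python-level disposition: signal.default_int_handler,
# signal.SIG_DFL, signal.SIG_IGN, or an earlier Python callable.  Any C handler
# installed by an extension module via sigaction() is not represented here.
print(previous)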
So I have this code (partially taken from the Python docs):
import signal

def handler(signum, frame):
    print('Signal handler called with signal', signum)

s = signal.signal(signal.SIGINT, handler)
some_fancy_code()  # this code uses subprocess.Popen() to call another script
signal.signal(signal.SIGINT, s)
What I found is that if I press Ctrl+C while my program is running, it correctly enters that handler and prints the message. Now, what I thought was that after receiving Ctrl+C my handler would suppress the default behaviour, so that, for example, the script started by subprocess.Popen() would not get the KeyboardInterrupt. But this is not the case.
However, when 'handler' is replaced with 'signal.SIG_IGN', this propagation never happens. Modified snippet:
import signal

s = signal.signal(signal.SIGINT, signal.SIG_IGN)
some_fancy_code()  # this code uses subprocess.Popen() to call another script
signal.signal(signal.SIGINT, s)
Is this because SIG_IGN is some kind of 'magic' disposition handled by the language itself? Or is there a way to achieve similar suppression with my own handler?
After reading a few questions on Stack Overflow I am a bit confused. Could someone clarify why the behaviour differs?
This is the specified POSIX behaviour of signals:
A child created via fork(2) inherits a copy of its parent's signal dispositions. During an execve(2), the dispositions of handled signals are reset to the default; the dispositions of ignored signals are left unchanged.
When you execute (fork/execve) the other script in the first case, the SIGINT handler is reset to the default handler in that script (the default action is to terminate the process); of course, the other script could install its own handler and change this behaviour.
However, in the second case you have configured SIGINT to be ignored, and that disposition is propagated to the other script, as described in the quote above. Again, the other script could change this by installing its own handler.
So this has nothing to do with Python directly. It is the expected behaviour of the underlying operating system's POSIX signal handling implementation.
PS. If you're wondering what fork() and execve() are: fork() creates a copy of the running process (a child) and execve() replaces the current process image with another program. This is the underlying mechanism subprocess.Popen() uses to run the other script: first make a copy of the current process, then replace it with the target program.
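A quick sketch of the difference, assuming a POSIX system; each child just reports what it sees for SIGINT after the fork/execve:

import signal
import subprocess
import sys

child_code = "import signal; print('child sees:', signal.getsignal(signal.SIGINT))"

def parent_handler(signum, frame):
    pass

# Case 1: the parent's handler is reset to the default in the exec'd child,
# so the child starts up with Python's usual KeyboardInterrupt handler.
signal.signal(signal.SIGINT, parent_handler)
subprocess.run([sys.executable, "-c", child_code])

# Case 2: SIG_IGN survives the execve, so the child reports SIG_IGN.
signal.signal(signal.SIGINT, signal.SIG_IGN)
subprocess.run([sys.executable, "-c", child_code])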
This question already has answers here:
What is the correct way to make my PyQt application quit when killed from the console (Ctrl-C)?
Why doesn't Ctrl+C work to break a Python program that uses PyQt? I want to debug it and get a stack trace, and for some reason this is harder to do than with C++!
CTRL+C causes a signal to be sent to the process. Python catches the signal and sets a global variable, something like CTRL_C_PRESSED = True. Then, whenever the Python interpreter gets to execute a new opcode, it sees the variable set and raises a KeyboardInterrupt.
This means that CTRL+C works only if the Python interpreter is spinning. If the interpreter is executing an extension module written in C that performs a long-running operation, CTRL+C won't interrupt it, unless it explicitly "cooperates" with Python. E.g.: time.sleep() is theoretically a blocking operation, but the implementation of that function "cooperates" with the Python interpreter to make CTRL+C work.
This is all by design: CTRL+C is meant to do a "clean abort"; this is why it gets turned into an exception by Python (so that the cleanups are executed during stack unwind), and its support by extension modules is sort of "opt-in". If you want to totally abort the process, without giving it a chance to clean up, you can use CTRL+\.
When Python calls QApplication::exec() (the C++ function), Qt doesn't know how to "cooperate" with Python for CTRL+C, and this is why it does not work. I don't think there's a good way to "make it work"; you may want to see if you can handle it through a global event filter.
— Giovanni Bajo
Adding this to the main program solved the problem.
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)
I'm not sure what this has to do with the explanation.
I agree with Neil G, and would add this:
If you do not call QApplication.exec_() to start the event loop, and instead execute your program in an interactive Python shell (using python -i), then PyQt will automatically process events whenever the interactive prompt is waiting, and Ctrl-C should again behave as expected. This is because the Qt event loop shares time with the Python interpreter, rather than running exclusively, giving the interpreter a chance to catch those interrupts.
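If you do need the Qt event loop, a common workaround, sketched here under the assumption of PyQt5, is to hand control back to the interpreter periodically so it can notice a pending Ctrl+C:

import sys
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication

app = QApplication(sys.argv)

# An empty timer callback returns control to the Python interpreter every
# 100 ms, giving it a chance to raise KeyboardInterrupt for a pending Ctrl+C.
timer = QTimer()
timer.timeout.connect(lambda: None)
timer.start(100)

sys.exit(app.exec_())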
I'd like to put my cmd.com window into a mode where Control-C does not generate a SIGINT signal to Python (ActiveState if it matters).
I know I can use the signal module to handle SIGINT. The problem is that handling SIGINT is too late; by the time it is handled, it has already interrupted a system call.
I'd like something equivalent to the *nix "raw" mode. Just let the input queue up and when it is safe for my application to read it, it will.
Maddeningly enough, msvcrt.getch() seems to return Control-C as a character. But that only works while the program is blocked by getch() itself. If I am in another system call (sleep, just to use an example), I get the SIGINT.
You need to call the Win32 API function SetConsoleCtrlHandler with NULL (0) as its first parameter and TRUE (1) as its second parameter. If you're already using pywin32, win32api.SetConsoleCtrlHandler is fine for the purpose; otherwise ctypes works, specifically via ctypes.windll.kernel32.SetConsoleCtrlHandler(0, 1).
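A minimal sketch of the ctypes route (Windows only; passing a NULL handler with TRUE makes the process ignore CTRL+C entirely):

import ctypes

kernel32 = ctypes.windll.kernel32

# NULL handler + Add=TRUE: this process ignores CTRL+C, so Python never
# receives the SIGINT that would otherwise interrupt a blocking call.
if not kernel32.SetConsoleCtrlHandler(None, True):
    raise OSError("SetConsoleCtrlHandler failed")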