Singleton send signal to actual running process - python

I've developed a program in Python and PyGTK, and today I added a singleton feature which prevents it from running if another instance is already running. Now I want to go further: if it is already running, I'd like to somehow make the existing instance call self.window.present() to show itself.
I've been looking at signals, pipes, FIFOs, message queues, sockets, etc. for three hours now! I don't know if I'm just not seeing it, but I can't find a way to do this (even though lots of apps do it).
So the question is: how do I send a "signal" to a running instance of the same script (which is not sitting in an infinite loop listening for it, but doing its job), to make it call a function?
I'm trying to send signals, using:
os.kill(int(apid[0]), signal.SIGUSR1)
and receiving them with:
signal.signal(signal.SIGUSR1, self.handler)

def handler(signum, frame):
    print 'Signal handler called with signal', signum
but it kills the running process with
Traceback (most recent call last):
  File "./algest_new.py", line 4080, in <module>
    gtk.main()
KeyboardInterrupt

The simple answer is, you don't. When you say you have implemented a "singleton feature" I'm not sure exactly what you mean. It seems almost as though you are expecting the code in the second process to be able to see the singleton object in the first one, which clearly isn't possible. But I may have misunderstood.
The usual way to do this is to create a file with a unique name at a known location, typically containing the process id of the running process. If you start your program and it sees the file already present it knows to explain to the user that there's a copy already running. You could also send a signal to that process (under Unix, anyway) to tell it to bring its window to the foreground.
Oh, and don't forget that your program should delete the PID file when it terminates :-)
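A minimal sketch of that approach (the PID file path and the choice of SIGUSR1 are illustrative assumptions, not something from the question):

import os
import signal
import sys

PIDFILE = '/tmp/myapp.pid'   # hypothetical location for the PID file

def running_pid():
    # Return the PID stored in the PID file if that process still exists.
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)          # signal 0 only checks that the process exists
        return pid
    except (IOError, OSError, ValueError):
        return None

pid = running_pid()
if pid:
    # Ask the already-running instance to raise its window; it must have
    # installed a SIGUSR1 handler for this to have any effect.
    os.kill(pid, signal.SIGUSR1)
    sys.exit(0)

with open(PIDFILE, 'w') as f:
    f.write(str(os.getpid()))
try:
    pass   # run the application here, e.g. gtk.main()
finally:
    os.remove(PIDFILE)           # delete the PID file on exit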

Confusingly, gtk.main will raise the KeyboardInterrupt exception if the signal handler raises any exception. With this program:
import gtk
import signal

def ohno(*args):
    raise Exception("Oh no")

signal.signal(signal.SIGUSR1, ohno)
gtk.main()
After launching, calling os.kill(pid, signal.SIGUSR1) from another process results in this exception:
File "signaltest.py", line 9, in <module>
gtk.main()
KeyboardInterrupt
This seems to be an issue with pygtk - an exception raised by a signal.signal handler in a non-gtk python app will do the expected thing and display the handler's exception (e.g. "Oh no").
So in short: if gtk.main is raising KeyboardInterrupt in response to other signals, check that your signal handlers aren't raising exceptions of their own.
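Applied to the original question, that means keeping the handler trivial and deferring the GTK work to the main loop. A sketch (assuming PyGTK, with app standing in for whatever object holds the window):

import signal
import gobject   # PyGTK's GLib bindings

def handle_sigusr1(signum, frame):
    # Don't touch GTK widgets and don't raise inside the handler;
    # just queue the call for the GTK main loop.
    gobject.idle_add(app.window.present)   # 'app' is the running instance (assumption)

signal.signal(signal.SIGUSR1, handle_sigusr1)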

Related

mpiexec + python + ^C: __del__ method not executed (and no traceback)

I have the following test_mpi.py python script:
from mpi4py import MPI
import time
class Foo:

    def __init__(self):
        print('Creation object.')

    def __del__(self):
        print('Object destruction.')

foo = Foo()
time.sleep(10)
If I execute it without recourse to mpiexec, using a simple python test_mpi.py, pressing CTRL+C after 5s, I get the following output:
ngreiner#Nathans-MacBook-Pro:~/Documents/scratch$ python test_mpi.py
Creation object.
^CTraceback (most recent call last):
  File "test_mpi.py", line 26, in <module>
    time.sleep(10)
KeyboardInterrupt
Object destruction.
ngreiner#Nathans-MacBook-Pro:~/Documents/scratch$
If I embed it within an mpiexec execution, using mpiexec -np 1 python test_mpi.py, again pressing CTRL+C after 5s, I now get:
ngreiner#Nathans-MacBook-Pro:~/Documents/scratch$ mpiexec -np 1 python test_mpi.py
Creation object.
^Cngreiner#Nathans-MacBook-Pro:~/Documents/scratch$
The traceback from python and the execution of the __del__ method have disappeared.
The main problem for me is the non-execution of the __del__ method, which is supposed to make some clean-up in my actual application.
Any idea how I could have the __del__ method executed when the Python execution is launched from mpiexec?
Thank you very much in advance for the help,
(My system configuration: macOS High sierra 10.13.6, Python 3.7.4, open-mpi 4.0.1, mpi4py 3.0.2.)
After a bit of searching, I found a solution that restores the printing of the traceback and the execution of the __del__ method when hitting ^C during an mpiexec run.
During a normal python execution (not launched by mpiexec, launched directly from the terminal), hitting ^C sends a SIGINT signal to python, which translates it into a KeyboardInterrupt exception (https://docs.python.org/3.7/library/signal.html).
But when hitting ^C during an mpiexec execution, it is the mpiexec process which receives the SIGINT signal, and instead of propagating it to its children processes (for instance python), it sends to its children processes a SIGTERM signal (https://www.open-mpi.org/doc/current/man1/mpirun.1.php).
It thus seems that Python doesn't react the same way to SIGINT and SIGTERM signals.
The workaround I found is to use the signal module, and to use a specific handler for the SIGTERM signal, which simply raises a KeyboardInterrupt. This can be achieved by the following lines:
def sigterm_handler():
    raise KeyboardInterrupt

import signal
signal.signal(signal.SIGTERM, sigterm_handler)
These lines can be included at the top of the executed Python script, or, to retain this behaviour whenever Python is used with mpiexec and the mpi4py package, at the top of the mpi4py package's __init__.py file.
This strategy may have side-effects (which I am unaware of) and should be used at your own risk.
Per the documentation, it is not guaranteed that __del__ will be called, so you are lucky that it is called in the non-MPI run.
For a simple case, you could use try/finally to be sure that the finally section is executed.
Or, more generally, use a context manager.
Here is the relevant quote from the documentation:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
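A minimal sketch of both alternatives, reusing the Foo class from the question with an explicit cleanup method instead of __del__ (note that under mpiexec this only helps if the SIGTERM handler above is installed, so that the signal becomes an exception that unwinds the stack):

import time
from contextlib import contextmanager

class Foo:
    def __init__(self):
        print('Creation object.')
    def cleanup(self):
        print('Object destruction.')

# try/finally: cleanup runs even if the sleep is interrupted by an exception
foo = Foo()
try:
    time.sleep(10)
finally:
    foo.cleanup()

# the same idea expressed as a context manager
@contextmanager
def managed_foo():
    foo = Foo()
    try:
        yield foo
    finally:
        foo.cleanup()

with managed_foo() as foo:
    time.sleep(10)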
The answer by ngreiner helped me, but at least with Python 2.7 and all Python 3 versions, the handler function needs two arguments. This modified code snippet with dummy arguments worked for me:
import signal

def sigterm_handler(signum, frame):
    raise KeyboardInterrupt

signal.signal(signal.SIGTERM, sigterm_handler)

Python: Timeout Exception Handling with Signal.Alarm

I am trying to implement a timeout exception handler if a function call is taking too long.
EDIT: In fact, I am writing a Python script using subprocess, which calls an old C++ program with arguments. I know that the program hangs from time to time without returning anything. That's why I am trying to put a time limit on each call and move on to the next call with different arguments, and so on.
I've been searching and trying to implement it, but it doesn't quite work, so I wish to get some help. What I have so far is:
#! /usr/bin/env python
import signal

class TimeOutException(Exception):
    def __init__(self, message, errors):
        super(TimeOutException, self).__init__(message)
        self.errors = errors

def signal_handler(signum, frame):
    raise TimeOutException("Timeout!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(3)

try:
    while True:
        pass
except TimeOutException:
    print "Timed out!"

signal.alarm(0)
EDIT: The error message I currently receive is "TypeError: __init__() takes exactly 3 arguments (2 given)".
Also, I would like to ask a basic question about the except block: what is the difference in role between the code right below "except TimeOutException" and the code inside the exception handler? It seems both can do the same thing?
Any help would be appreciated.
if a function call is taking too long
I realize that this might not be obvious for inexperienced developers, but the methods applicable for approaching this problem entirely depend on what you are doing in this "busy function", such as:
Is this a heavy computation? If yes, which Python interpreter are you using? CPython or PyPy? If CPython: does this computation only use Python bytecode or does it involve function calls outsourced to compiled machine code (which may hold Python's Global Interpreter Lock for quite an uncontrollable amount of time)?
Is this a lot of I/O work? If yes, can you abort this I/O work in an arbitrary state? Or do you need to properly clean up? Are you using a certain framework such as gevent or Twisted?
Edit:
So it looks like you are just spawning a subprocess and waiting for it to terminate. Great, that is actually one of the simplest cases for which to implement timeout control. Python 3 ships a corresponding feature! :-) Have a look at
https://docs.python.org/3/library/subprocess.html#subprocess.call
The timeout argument is passed to Popen.wait(). If the timeout
expires, the child process will be killed and then waited for again.
The TimeoutExpired exception will be re-raised after the child process
has terminated.
Edit2:
Example code for you; save this to a file and execute it with Python 3.3 at least:
import subprocess

try:
    subprocess.call(['python', '-c', 'print("hello")'], timeout=2)
except subprocess.TimeoutExpired as e:
    print("%s was terminated as of timeout. Its output was:\n%s" % (e.cmd, e.output))

try:
    subprocess.call(['python'], timeout=2)
except subprocess.TimeoutExpired as e:
    print("%s was terminated as of timeout. Its output was:\n%s" % (e.cmd, e.output))
In the first case, the subprocess returns immediately and no timeout exception is raised. In the second case, the timeout expires, and your controlling process (the process running the script above) attempts to terminate the subprocess. This succeeds. After that, subprocess.TimeoutExpired is raised and the exception handler deals with it. For me, the output of the script above is:
['python'] was terminated as of timeout. Its output was:
None
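The None is expected: subprocess.call does not capture the child's output, so TimeoutExpired.output has nothing to report. If you actually need the output, a hedged variant using subprocess.run (available since Python 3.5) could look like this:

import subprocess

try:
    # subprocess.run can both capture output and enforce a timeout
    result = subprocess.run(['python', '-c', 'print("hello")'],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            timeout=2)
    print(result.stdout)
except subprocess.TimeoutExpired as e:
    print("%s timed out; partial output: %s" % (e.cmd, e.output))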

Python - Function is unable to run in new thread

I'm trying to kill the notepad.exe process on Windows using this function:
import thread, wmi, os, inspect

print 'CMD: Kill command called'

def kill():
    c = wmi.WMI()
    Commands = ['notepad.exe']
    if Commands[0] != 'All':
        print 'CMD: Killing: ', Commands[0]
        for process in c.Win32_Process():
            if process.Name == Commands[0]:
                process.Terminate()
    else:
        print 'CMD: trying to kill all processes'
        for process in c.Win32_Process():
            if process.executablepath != inspect.getfile(inspect.currentframe()):
                try:
                    process.Terminate()
                except:
                    print 'CMD: Unable to kill: ', process.Name

kill()                             # Works
thread.start_new_thread(kill, ())  # Not working
It works like a charm when I'm calling the function like this:
kill()
But when running the function in a new thread it crashes and I have no idea why.
import thread, wmi, os
import pythoncom

print 'CMD: Kill command called'

def kill():
    pythoncom.CoInitialize()
    . . .
Running Windows functions in threads can be tricky since it often involves COM objects. Calling pythoncom.CoInitialize() usually allows you to do it. Also, you may want to take a look at the threading library; it's much easier to deal with than thread.
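A sketch of that combination, using threading.Thread and calling CoInitialize/CoUninitialize inside the thread function (only the single-process-name branch of the question's kill function is shown):

import threading
import pythoncom
import wmi

def kill(name='notepad.exe'):
    # Every thread that uses COM (which the wmi module does) must
    # initialise COM for itself.
    pythoncom.CoInitialize()
    try:
        c = wmi.WMI()
        for process in c.Win32_Process():
            if process.Name == name:
                process.Terminate()
    finally:
        pythoncom.CoUninitialize()

t = threading.Thread(target=kill)
t.start()
t.join()   # keeps the main program alive until the thread has finished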
There are a couple of problems (EDIT: The second problem has been addressed since starting my answer, by "MikeHunter", so I will skip that):
Firstly, your program ends right after starting the thread, taking the thread with it. I will assume this is not a problem long-term because presumably this is going to be part of something bigger. To get around that, you can simulate something else keeping the program going by just adding a time.sleep() call at the end of the script with, say, 5 seconds as the sleep length.
This will allow the program to give us a useful error, which in your case is:
CMD: Kill command called
Unhandled exception in thread started by <function kill at 0x0223CF30>
Traceback (most recent call last):
  File "killnotepad.py", line 4, in kill
    c = wmi.WMI ()
  File "C:\Python27\lib\site-packages\wmi.py", line 1293, in connect
    raise x_wmi_uninitialised_thread ("WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex]")
wmi.x_wmi_uninitialised_thread: <x_wmi: WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex] (no underlying exception)>
As you can see, this reveals the real problem and leads us to the solution posted by MikeHunter.

Fun with Signal Handlers in Python Interrupting Select

I'm working on a programming project--writing a basic P2P filesharing application in Python. I'm using two threads: a main one to call select and wait for input from a list of sockets and sys.stdin (to receive typed commands) and a helper thread that takes status update messages off a queue and prints them. (It is the only thing that prints anything)
I'm also required to catch the standard SIGINT and handle it to exit gracefully. I have a quit method that does this; typing 'quit' as a command works just fine. So in the main thread I try setting this method as the handler for SIGINT. As far as I can tell, the process catches the signal and calls the quit method. The helper thread prints a message confirming that it is exiting. But then I get the following error message from the main thread:
Traceback (most recent call last):
  File "peer.py", line 226, in <module>
    main()
  File "peer.py", line 223, in main
    p.run()
  File "peer.py", line 160, in run
    readables, writables, exceptions = select(self.sockets, [], [])
select.error: (4, 'Interrupted system call')
After which the program does still exit. Whereas without the signal handler in place, sending a SIGINT gives me the following:
Traceback (most recent call last):
  File "peer.py", line 225, in <module>
    main()
  File "peer.py", line 222, in main
    p.run()
  File "peer.py", line 159, in run
    readables, writables, exceptions = select(self.sockets, [], [])
KeyboardInterrupt
This one fails to terminate the program; I have to stop it and kill it manually. This is confusing, because the SIGINT appears to interrupt the call to select only when it is caught by my custom method (which only puts a message on the print queue and sets a "done" variable). Does anyone know how this can be happening? Is it just a bad idea to use signal handlers and threads simultaneously?
I'm not sure about using signal handlers to catch this case, but I've found a recipe for handling it on *nix-based systems here: http://code.activestate.com/recipes/496735-workaround-for-missed-sigint-in-multithreaded-prog/
In a nutshell (if I understand correctly):
Before you start any new threads, fork a child process (using os.fork) to finish the program run, and have the parent process watch for the KeyboardInterrupt.
When the parent catches the keyboard interrupt, you can kill the child process (which by now may have started other threads) using os.kill. This will, in turn, terminate any threads of that child process.
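A simplified sketch of that recipe (the real ActiveState recipe handles more corner cases; main() stands in for whatever starts your threads):

import os
import signal
import sys

def main():
    # start the worker threads and do the real work here
    pass

if __name__ == '__main__':
    child = os.fork()
    if child == 0:
        main()                # child process: runs the threads
        sys.exit(0)
    try:
        os.waitpid(child, 0)  # parent: just waits, so it can catch Ctrl-C
    except KeyboardInterrupt:
        # killing the child also takes down all of its threads
        os.kill(child, signal.SIGKILL)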
Yes, last night after I stopped working on it I realized that I did want it to interrupt. It was being interrupted by executing the signal handler, presumably. So I just catch the select.error and have it jump to the end of the loop, where it immediately exits and moves on to the cleanup code.
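For reference, that pattern might look roughly like this (self.done and self.sockets are assumptions based on the question; on Python 2, select.error carries the errno as its first argument):

import errno
import select

def run(self):
    while not self.done:
        try:
            readables, writables, exceptions = select.select(self.sockets, [], [])
        except select.error as e:
            # select was interrupted by the signal handler; loop back so the
            # while condition re-checks self.done and exits cleanly.
            if e.args[0] == errno.EINTR:
                continue
            raise
        # ... handle the ready sockets here ...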

Installing signal handler with Python

(there is a follow up to this question here)
I am working on writing a Python-based init system for Linux, but I'm having an issue getting signals to my Python init script. From the 'man 2 kill' page:
The only signals that can be sent to process ID 1, the init process, are those for which init has explicitly installed signal handlers.
In my Python based Init, I have a test function and a signal handler setup to call that function:
def SigTest(SIG, FRM):
    print "Caught SIGHUP!"

signal.signal(signal.SIGHUP, SigTest)
From another TTY (the init script executes sh on another tty), if I send a signal with kill -HUP 1, it is completely ignored and the text is never printed.
I found this issue because I wrote a reaping function for my Python init to reap its child processes as they die, but they all just became zombies; it took a while to figure out that Python was never getting the SIGCHLD signal. Just to ensure my environment was sane, I wrote a C program that forks and has the child send PID 1 a signal, and it did register.
How do I install a signal handler the system will acknowledge if signal.signal(SIG, FUNC) isn't working?
I'm going to try using ctypes to register my handler with C code and see if that works, but I'd rather have a pure Python answer if at all possible.
Ideas?
(I'm not a programmer, I'm really in over my head here :p)
Test code below...
import os
import sys
import time
import signal

def SigTest(SIG, FRM):
    print "SIGINT Caught"

print "forking for ash"
cpid = os.fork()
if cpid == 0:
    os.closerange(0, 4)
    sys.stdin = open('/dev/tty2', 'r')
    sys.stdout = open('/dev/tty2', 'w')
    sys.stderr = open('/dev/tty2', 'w')
    os.execv('/bin/ash', ('ash',))

print "ash started on tty2"

signal.signal(signal.SIGHUP, SigTest)

while True:
    time.sleep(5.0)
Signal handlers mostly work in Python, but there are some problems. One is that your handler won't run until the interpreter re-enters its bytecode loop: if your program is blocked in a C function, the signal handler is not called until that function returns. You don't show the code where you are waiting. Are you using signal.pause()?
Another is that if you are in a system call, you will get an exception after the signal handler returns. You need to wrap all system calls with a retry handler (at least on Linux).
It's interesting that you are writing an init replacement... That's something like a process manager. The proctools code might interest you, since it does handle SIGCHLD.
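For the zombie-reaping part specifically, the usual shape of a SIGCHLD handler is a waitpid loop with WNOHANG (a generic sketch, not specific to proctools or to your init script):

import os
import signal

def reap_children(signum, frame):
    # Collect every child that has already exited; WNOHANG keeps the
    # loop from blocking when no more children are ready.
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError:      # no children left at all
            break
        if pid == 0:         # children exist, but none have exited yet
            break

signal.signal(signal.SIGCHLD, reap_children)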
By the way, this code:
import signal

def SigTest(SIG, FRM):
    print "SIGINT Caught"

signal.signal(signal.SIGHUP, SigTest)

while True:
    signal.pause()
Does work on my system.
