I am trying to implement a timeout exception handler for when a function call takes too long.
EDIT: In fact, I am writing a Python script using subprocess, which calls an old C++ program with arguments. I know that the program hangs from time to time, not returning anything. That's why I am trying to put a time limit on each call and move on to the next call with a different argument, etc.
I've been searching and trying to implement it, but it doesn't quite work, so I wish to get some help. What I have so far is:
#! /usr/bin/env python
import signal

class TimeOutException(Exception):
    def __init__(self, message, errors):
        super(TimeOutException, self).__init__(message)
        self.errors = errors

def signal_handler(signum, frame):
    raise TimeOutException("Timeout!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(3)

try:
    while True:
        pass
except TimeOutException:
    print "Timed out!"

signal.alarm(0)
EDIT: The error message I currently receive is "TypeError: __init__() takes exactly 3 arguments (2 given)".
Also, I would like to ask a basic question regarding the except block: what is the difference in role between the code right below "except TimeOutException" and the code in the signal handler? It seems both can do the same thing?
Any help would be appreciated.
if a function call is taking too long
I realize that this might not be obvious to inexperienced developers, but the methods applicable to this problem depend entirely on what you are doing in this "busy function", such as:
Is this a heavy computation? If yes, which Python interpreter are you using? CPython or PyPy? If CPython: does this computation only use Python bytecode or does it involve function calls outsourced to compiled machine code (which may hold Python's Global Interpreter Lock for quite an uncontrollable amount of time)?
Is this a lot of I/O work? If yes, can you abort this I/O work in an arbitrary state? Or do you need to properly clean up? Are you using a certain framework such as gevent or Twisted?
Edit:
So, it looks like you are just spawning a subprocess and waiting for it to terminate. Great, that is actually one of the simplest problems to implement a timeout control for. Python (3) ships a corresponding feature! :-) Have a look at
https://docs.python.org/3/library/subprocess.html#subprocess.call
The timeout argument is passed to Popen.wait(). If the timeout
expires, the child process will be killed and then waited for again.
The TimeoutExpired exception will be re-raised after the child process
has terminated.
Edit2:
Example code for you; save this to a file and execute it with Python 3.3 or later:
import subprocess

try:
    subprocess.call(['python', '-c', 'print("hello")'], timeout=2)
except subprocess.TimeoutExpired as e:
    print("%s was terminated as of timeout. Its output was:\n%s" % (e.cmd, e.output))

try:
    subprocess.call(['python'], timeout=2)
except subprocess.TimeoutExpired as e:
    print("%s was terminated as of timeout. Its output was:\n%s" % (e.cmd, e.output))
In the first case, the subprocess returns immediately, so no timeout exception is raised. In the second case, the timeout expires, and your controlling process (the process running the script above) attempts to terminate the subprocess. This succeeds. After that, subprocess.TimeoutExpired is raised and the exception handler deals with it. For me, the output of the script above is:
['python'] was terminated as of timeout. Its output was:
None
I have a compiled program that I launch as a background process using Python's sh module. I want to run it for 20 seconds, then kill it. I always get an exception I can't catch. The code looks like:
cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _bg=True, _out='/dev/null', _err='/dev/null', _timeout=20)
    p.wait()
except sh.TimeoutException:
    print('caught timeout')
I have also tried calling p.kill() and p.terminate() after catching the timeout exception. I see a stack trace that ends in SignalException_SIGKILL, which I can't seem to catch, and which references none of my code. Also, the text comes to the screen even though I'm routing stdout and stderr to /dev/null.
The program seems to run OK: the logger collects the data, but I want to eliminate or catch the exception. Any advice appreciated.
_timeout for the original invocation only applies when the command is run synchronously, in the foreground. When you run a command asynchronously, in the background, with _bg=True, you need to pass timeout to the wait call instead, e.g.:
cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _bg=True, _out='/dev/null', _err='/dev/null')
    p.wait(timeout=20)
except sh.TimeoutException:
    print('caught timeout')
Of course, in this case, you're not taking advantage of it being in the background (no work is done between launch and wait), so you may as well run it in the foreground and leave the _timeout on the invocation:
cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _out='/dev/null', _err='/dev/null', _timeout=20)
except sh.TimeoutException:
    print('caught timeout')
You don't need to explicitly kill or terminate the child process; the _timeout_signal argument is used to signal the child on timeout (it defaults to signal.SIGKILL). You can change it to another signal if SIGKILL is not what you want, but either way you don't need to call kill/terminate yourself; timing out sends the signal for you.
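For example, if you want the logger to get a chance to shut down cleanly, you can pick a gentler signal. A minimal sketch, assuming your version of sh supports the _timeout_signal keyword:

import signal
import sh

cmd = sh.Command('./rtlogger')
try:
    # sh signals the child itself on timeout; here we pick SIGTERM instead of
    # the default SIGKILL so the logger can flush and exit cleanly.
    cmd('config.txt', _out='/dev/null', _err='/dev/null',
        _timeout=20, _timeout_signal=signal.SIGTERM)
except sh.TimeoutException:
    print('caught timeout')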
I have the below code that I am running
try:
    child = pexpect.spawn(
        ('some command --path {0} somethingmore --args {1}').format(
            <iterator-output>, something),
        timeout=300)
    child.logfile = open(file_name, 'w')
    child.expect('x*')
    child.sendline(something)
    child.expect('E*')
    child.sendline(something)
    #child.read()
    child.interact()
    time.sleep(15)
    print child.status
except Exception as e:
    print "Exception in child process"
    print str(e)
Now, the command in pexpect spawns a subprocess, taking one of its inputs from a loop. Every time it spins up a subprocess I try to capture the logs via child.read(), but then it waits for that subprocess to complete before going around the loop again. How do I keep it running in the background? I get the logs of the command input/output that I enter dynamically, but not of the process that runs afterwards, unless I use read() or interact(). I tried How do I make a command to run in background using pexpect.spawn?, but that uses interact(), which again waits for the subprocess to complete. Since the loop will iterate more than 100 times, I cannot wait for one call to complete before moving to the next. The command in pexpect is an AWS Lambda call, and all I need is to make sure the command is triggered, but I am not able to capture the process output of that call without waiting for it to complete. Please let me know your suggestions.
If you don't actually want to interact with lots of processes in parallel, but instead just want to interact with each process briefly, then ignore each one while it runs and move on to interacting with the next one…
# Do everything up to the final `interact`. After that, the child
# won't be writing to us anymore, but it will still be running for
# many seconds. So, return the child object so we can deal with it
# later, after we've started up all the other children.
def start_command(path, arg):
    try:
        child = pexpect.spawn(('some command --path {0} somethingmore --args {1}').format(path, arg), timeout=300)
        child.logfile = open(file_name, 'w')
        child.expect('x*')
        child.sendline(something)
        child.expect('E*')
        child.sendline(something)
        # child.read()
        child.interact()
        return child
    except Exception as e:
        print "Exception in child process"
        print str(e)

# First, start up all the children and do the initial interaction
# with each one.
children = []
for path, args in some_iterable:
    children.append(start_command(path, args))

# Now we just need to wait until they're all done. This will get
# them in as-launched order, rather than as-completed, but that
# seems like it should be fine for your use case.
for child in children:
    try:
        child.wait()
        print child.status
    except Exception as e:
        print "Exception in child process"
        print str(e)
A few things:
Notice from the code comments that I'm assuming the child isn't writing anything to us (and waiting for us to read it) after the initial interaction. If that's not true, things are a bit more complicated.
If you want to not only do this, but also spin up 8 children at a time, or even all of them at once, you can (as shown in my other answer) use an executor or just a mess of threads for the initial start_command calls, and have those tasks/threads return the child object to be waited on later. For example, with the Executor version, each future's result() will be a pexpect child process. However, you definitely need to read the pexpect docs on threads in that case—with some versions of linux, passing child-process objects between threads can break the objects.
Finally, since you will now be seeing things much more out-of-order than the original version, you might want to change your print statements to show which child you're printing for (which also probably means changing children from a list of children to a list of (child, path, arg) tuples or the like).
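For example, a rough sketch reusing the hypothetical names from the code above:

# Keep (child, path, arg) together so the later prints can say which run
# they belong to.
children = []
for path, arg in some_iterable:
    children.append((start_command(path, arg), path, arg))

for child, path, arg in children:
    if child is None:   # start_command already printed the failure
        continue
    try:
        child.wait()
        print 'child for', path, arg, 'exited with status', child.status
    except Exception as e:
        print 'Exception in child for', path, arg, ':', str(e)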
If you want to run a process in the background, but at the same time interact with it, the simplest solution is to just kick off a thread to interact with the process.*
In your case, it sounds like you're running hundreds of processes, so you want to run some of them in parallel, but maybe not all of them at once? If so, you should use a thread pool or executor. For example, using concurrent.futures from the stdlib (or pip install the futures backport if your Python is too old):
def run_command(path, arg):
    try:
        child = pexpect.spawn(('some command --path {0} somethingmore --args {1}').format(path, arg), timeout=300)
        child.logfile = open(file_name, 'w')
        child.expect('x*')
        child.sendline(something)
        child.expect('E*')
        child.sendline(something)
        # child.read()
        child.interact()
        time.sleep(15)
        print child.status
    except Exception as e:
        print "Exception in child process"
        print str(e)

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as x:
    fs = []
    for path, arg in some_iterable:
        fs.append(x.submit(run_command, path, arg))
    concurrent.futures.wait(fs)
If you need to return a value (or raise an exception) from the threaded code, you'll probably want a loop over as_completed(fs) instead of just plain wait. But here, you just seem to be printing stuff out and then forgetting it.
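Roughly, a sketch reusing the fs list from above (this assumes run_command is changed to return a value or let its exceptions propagate, rather than swallowing them):

# Handle each task as it finishes; fut.result() re-raises whatever the
# task raised in its worker thread.
for fut in concurrent.futures.as_completed(fs):
    try:
        fut.result()
    except Exception as e:
        print 'a child failed:', str(e)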
If the path, arg really do come straight out of an iterable like this, it's usually simpler to use x.map(run_command, some_iterable).
All of this (and other options, too) is explained pretty nicely in the module docs.
Also see the pexpect FAQ and common problems. I don't think there are any issues that will affect you here in current versions (we're always spawning the child and interacting with it entirely in a single thread-pooled task), but I vaguely remember there used to be an additional problem in the past (something to do with signals?).
* I think asyncio would be a better solution, except that as far as I know none of the attempts to fork or reimplement pexpect in a nonblocking way are complete enough to actually use…
I've developed a program in Python and PyGTK, and today I added a singleton feature that prevents it from running if another instance is already running. But now I want to go further and, if it is running, somehow make the existing instance call self.window.present() to show itself.
So I've been looking at signals, pipes, FIFOs, message queues, sockets, etc. for three hours now! I don't know if I'm just not seeing it or what, but I can't find a way to do this (even though lots of apps do it).
Now, the question would be: how do I send a "signal" to a running instance of the same script (which is not sitting in an infinite loop listening for it, but doing its job) to make it call a function?
I'm trying to send signals, using:
os.kill(int(apid[0]),signal.SIGUSR1)
and receiving them with:
signal.signal(signal.SIGUSR1, self.handler)
def handler(signum, frame):
    print 'Signal handler called with signal', signum
but it kills the running process with
Traceback (most recent call last):
File "./algest_new.py", line 4080, in <module>
gtk.main()
KeyboardInterrupt
The simple answer is, you don't. When you say you have implemented a "singleton feature" I'm not sure exactly what you mean. It seems almost as though you are expecting the code in the second process to be able to see the singleton object in the first one, which clearly isn't possible. But I may have misunderstood.
The usual way to do this is to create a file with a unique name at a known location, typically containing the process id of the running process. If you start your program and it sees the file already present it knows to explain to the user that there's a copy already running. You could also send a signal to that process (under Unix, anyway) to tell it to bring its window to the foreground.
Oh, and don't forget that your program should delete the PID file when it terminates :-)
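For illustration, a minimal sketch of that approach; the file location and names here are just placeholders, not anything from your code:

import os
import signal
import sys

PIDFILE = '/tmp/myapp.pid'   # hypothetical well-known location

def running_pid():
    # Return the PID stored in the file if that process still exists, else None.
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)          # signal 0 only probes for existence
        return pid
    except (IOError, OSError, ValueError):
        return None

pid = running_pid()
if pid:
    # Ask the already-running instance to present its window.
    os.kill(pid, signal.SIGUSR1)
    sys.exit(0)

with open(PIDFILE, 'w') as f:
    f.write(str(os.getpid()))
try:
    pass  # install the SIGUSR1 handler and run gtk.main() here
finally:
    os.remove(PIDFILE)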
Confusingly, gtk.main will raise the KeyboardInterrupt exception if the signal handler raises any exception. With this program:
import gtk
import signal

def ohno(*args):
    raise Exception("Oh no")

signal.signal(signal.SIGUSR1, ohno)
gtk.main()
After launching, calling os.kill(pid, signal.SIGUSR1) from another process results in this exception:
File "signaltest.py", line 9, in <module>
gtk.main()
KeyboardInterrupt
This seems to be an issue with pygtk - an exception raised by a signal.signal handler in a non-gtk python app will do the expected thing and display the handler's exception (e.g. "Oh no").
So in short: if gtk.main is raising KeyboardInterrupt in response to other signals, check that your signal handlers aren't raising exceptions of their own.
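In this case that means doing as little as possible in the handler and deferring the real work to the GTK main loop. A sketch, where window stands for your gtk.Window instance (the question's self.window):

import gobject
import signal

def handler(signum, frame):
    # Don't touch GTK (and don't raise) here; just queue the call on the main loop.
    gobject.idle_add(window.present)

signal.signal(signal.SIGUSR1, handler)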
I'm trying to kill the notepad.exe process on Windows using this function:
import thread, wmi, os, inspect

print 'CMD: Kill command called'

def kill():
    c = wmi.WMI()
    Commands = ['notepad.exe']
    if Commands[0] != 'All':
        print 'CMD: Killing: ', Commands[0]
        for process in c.Win32_Process():
            if process.Name == Commands[0]:
                process.Terminate()
    else:
        print 'CMD: trying to kill all processes'
        for process in c.Win32_Process():
            if process.executablepath != inspect.getfile(inspect.currentframe()):
                try:
                    process.Terminate()
                except:
                    print 'CMD: Unable to kill: ', process.Name

kill()                                # Works
thread.start_new_thread(kill, ())     # Not working
It works like a charm when I'm calling the function like this:
kill()
But when running the function in a new thread it crashes and I have no idea why.
import thread, wmi, os
import pythoncom

print 'CMD: Kill command called'

def kill():
    pythoncom.CoInitialize()
    . . .
Running Windows functions in threads can be tricky since it often involves COM objects. Calling pythoncom.CoInitialize() in the thread usually allows you to do it. Also, you may want to take a look at the threading library; it's much easier to deal with than thread.
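If you do switch to threading, a small sketch of running the same kill() function (with the CoInitialize() call added as above):

import threading

t = threading.Thread(target=kill)
t.start()
t.join()   # wait for the kill to finish; drop this if you want it fully asynchronous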
There are a couple of problems (EDIT: The second problem has been addressed since starting my answer, by "MikeHunter", so I will skip that):
Firstly, your program ends right after starting the thread, taking the thread with it. I will assume this is not a problem long-term because presumably this is going to be part of something bigger. To get around that, you can simulate something else keeping the program going by just adding a time.sleep() call at the end of the script with, say, 5 seconds as the sleep length.
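Something like this at the bottom of the script is enough for testing (a sketch):

import time

thread.start_new_thread(kill, ())
time.sleep(5)   # keep the main thread alive long enough for the worker to report its error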
This will allow the program to give us a useful error, which in your case is:
CMD: Kill command called
Unhandled exception in thread started by <function kill at 0x0223CF30>
Traceback (most recent call last):
File "killnotepad.py", line 4, in kill
c = wmi.WMI ()
File "C:\Python27\lib\site-packages\wmi.py", line 1293, in connect
raise x_wmi_uninitialised_thread ("WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex]")
wmi.x_wmi_uninitialised_thread: <x_wmi: WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex] (no underlying exception)>
As you can see, this reveals the real problem and leads us to the solution posted by MikeHunter.
I have a Python script, which I daemonise using this code:
def daemonise():
    import os  # needed for os.getpid() below
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr
    if fork(): exit(0)
    umask(0)
    setsid()
    if fork(): exit(0)
    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())
    dup2(so.fileno(), stdout.fileno())
    dup2(se.fileno(), stderr.fileno())
    print 'this file has the output from daemon%s' % os.getpid()
    print >> stderr, 'this file has the errors from daemon%s' % os.getpid()
The script then runs in a loop like this:

while True:
    try:
        funny_code()
        sleep(10)
    except:
        pass

It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err daemons?
[Edit]
Without starting a process like monit, is there a way to write a watchdog in python, which can watch my other daemons and restart when they go down? (Who watches the watchdog.)
You really should use python-daemon for this, a library that implements PEP 3143, the standard daemon process library. This way you ensure that your application does all the right things for whichever flavour of UNIX it is running under. No need to reinvent the wheel.
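Basic usage is tiny. A sketch (funny_code and the sleep come from your own loop; treat this as illustrative rather than a drop-in replacement):

import time
import daemon

def main():
    while True:
        try:
            funny_code()
            time.sleep(10)
        except Exception:
            pass  # at the very least, log these (see the other answer)

# DaemonContext handles forking, setsid, umask, and stream redirection for you.
with daemon.DaemonContext():
    main()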
Why are you silently swallowing all exceptions? Try to see what exceptions are being caught by this:
while True:
    try:
        funny_code()
        sleep(10)
    except BaseException, e:
        print e.__class__, e.message
        pass
Something unexpected might be happening which is causing it to fail, but you'll never know if you blindly ignore all the exceptions.
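At minimum, log the full traceback so it ends up in the daemon's .err file (a sketch):

import traceback

while True:
    try:
        funny_code()
        sleep(10)
    except BaseException:
        # print_exc() writes to stderr, which daemonise() redirected
        # to the daemon-<pid>.err file
        traceback.print_exc()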
I recommend using supervisord (written in Python, very easy to use) for daemonizing and monitoring processes. Running under supervisord you would not have to use your daemonise function.
What I've used with my clients is daemontools. It is a proven, well-tested tool for running anything daemonized.
You just write your application without any daemonization code, so it runs in the foreground; then create a daemontools service directory for it, and it will discover your application and automatically restart it from then on, and every time the system restarts.
It can also handle log rotation and stuff. Saves a lot of tedious, repeated work.