There is a socket-related function call in my code; that function is from another module and thus out of my control. The problem is that it occasionally blocks for hours, which is totally unacceptable. How can I limit the function's execution time from my code? I guess the solution must utilize another thread.
An improvement on @rik.the.vik's answer would be to use the with statement to give the timeout function some syntactic sugar:
import signal
from contextlib import contextmanager

class TimeoutException(Exception):
    pass

@contextmanager
def time_limit(seconds):
    def signal_handler(signum, frame):
        raise TimeoutException("Timed out!")
    signal.signal(signal.SIGALRM, signal_handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)

try:
    with time_limit(10):
        long_function_call()
except TimeoutException as e:
    print("Timed out!")
I'm not sure how cross-platform this might be, but using signals and alarm might be a good way of looking at this. With a little work you could make this completely generic as well and usable in any situation.
http://docs.python.org/library/signal.html
So your code is going to look something like this.
import signal

def signal_handler(signum, frame):
    raise Exception("Timed out!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(10)  # Ten seconds
try:
    long_function_call()
except Exception as msg:
    print("Timed out!")
Here's a Linux/OSX way to limit a function's running time. This is in case you don't want to use threads, and want your program to wait until the function ends, or the time limit expires.
from multiprocessing import Process
from time import sleep

def f(time):
    sleep(time)

def run_with_limited_time(func, args, kwargs, time):
    """Runs a function with time limit

    :param func: The function to run
    :param args: The function's args, given as tuple
    :param kwargs: The function's keywords, given as dict
    :param time: The time limit in seconds
    :return: True if the function ended successfully. False if it was terminated.
    """
    p = Process(target=func, args=args, kwargs=kwargs)
    p.start()
    p.join(time)
    if p.is_alive():
        p.terminate()
        return False
    return True

if __name__ == '__main__':
    print(run_with_limited_time(f, (1.5, ), {}, 2.5))  # True
    print(run_with_limited_time(f, (3.5, ), {}, 2.5))  # False
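One limitation of this approach is that any return value from func is lost, since the work happens in a separate process. A minimal sketch of one way around that, passing a small, picklable result back through a multiprocessing.Queue; run_with_result and _call_and_store are illustrative names, not part of the original answer:

from multiprocessing import Process, Queue

def _call_and_store(queue, func, args, kwargs):
    queue.put(func(*args, **kwargs))  # the child process stores the result

def run_with_result(func, args, kwargs, time_limit):
    queue = Queue()
    p = Process(target=_call_and_store, args=(queue, func, args, kwargs))
    p.start()
    p.join(time_limit)
    if p.is_alive():
        p.terminate()
        return False, None            # timed out, no result
    # assumes func returned normally; exceptions in the child are not handled here
    return True, queue.get(timeout=1)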
I prefer a context manager approach because it allows the execution of multiple Python statements within a with time_limit statement. Because Windows does not have SIGALRM, a more portable and perhaps more straightforward method could be to use a Timer:
from contextlib import contextmanager
import threading
import _thread

class TimeoutException(Exception):
    def __init__(self, msg=''):
        self.msg = msg

@contextmanager
def time_limit(seconds, msg=''):
    timer = threading.Timer(seconds, lambda: _thread.interrupt_main())
    timer.start()
    try:
        yield
    except KeyboardInterrupt:
        raise TimeoutException("Timed out for operation {}".format(msg))
    finally:
        # if the action ends in specified time, timer is canceled
        timer.cancel()
import time

# ends after 5 seconds
with time_limit(5, 'sleep'):
    for i in range(10):
        time.sleep(1)

# this will actually end after 10 seconds
with time_limit(5, 'sleep'):
    time.sleep(10)
The key technique here is the use of _thread.interrupt_main to interrupt the main thread from the timer thread. One caveat is that the main thread does not always respond to the KeyboardInterrupt raised by the Timer quickly. For example, time.sleep() calls a system function so a KeyboardInterrupt will be handled after the sleep call.
Here is a simple way of getting the desired effect:
https://pypi.org/project/func-timeout
This saved my life.
And now an example of how it works: let's say you have a huge list of items to process and you are iterating your function over those items. However, for some strange reason, your function gets stuck on item n without raising an exception. You need the other items to be processed, the more the better. In this case, you can set a timeout for processing each item:
import time
import func_timeout

def my_function(n):
    """Sleep for n seconds and return n squared."""
    print(f'Processing {n}')
    time.sleep(n)
    return n**2

def main_controller(max_wait_time, all_data):
    """
    Feed my_function with a list of items to process (all_data).
    However, if max_wait_time is exceeded, return the item and a fail info.
    """
    res = []
    for data in all_data:
        try:
            my_square = func_timeout.func_timeout(
                max_wait_time, my_function, args=[data]
            )
            res.append((my_square, 'processed'))
        except func_timeout.FunctionTimedOut:
            print('error')
            res.append((data, 'fail'))
            continue
    return res

timeout_time = 2.1  # my time limit
all_data = range(1, 10)  # the data to be processed
res = main_controller(timeout_time, all_data)
print(res)
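If I remember the package's API correctly, it also ships a decorator form, func_set_timeout, which saves the explicit func_timeout(...) call at every use site; a sketch under that assumption, reusing the 2.1-second limit from above:

import time
import func_timeout

@func_timeout.func_set_timeout(2.1)  # assumed decorator provided by the func-timeout package
def my_function(n):
    time.sleep(n)
    return n ** 2

try:
    print(my_function(1))   # finishes within the limit
    print(my_function(5))   # expected to raise FunctionTimedOut
except func_timeout.FunctionTimedOut:
    print('timed out')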
Doing this from within a signal handler is dangerous: you might be inside an exception handler at the time the exception is raised, and leave things in a broken state. For example,
def function_with_enforced_timeout():
    f = open_temporary_file()
    try:
        ...
    finally:
        here()
        unlink(f.filename)
If the exception is raised in here(), the temporary file will never be deleted.
The solution here is for asynchronous exceptions to be postponed until the code is not inside exception-handling code (an except or finally block), but Python doesn't do that.
Note that this won't interrupt anything while executing native code; it'll only interrupt it when the function returns, so this may not help this particular case. (SIGALRM itself might interrupt the call that's blocking--but socket code typically simply retries after an EINTR.)
Doing this with threads is a better idea, since it's more portable than signals. Since you're starting a worker thread and blocking until it finishes, there are none of the usual concurrency worries. Unfortunately, there's no way to deliver an exception asynchronously to another thread in Python (other thread APIs can do this). It'll also have the same issue with sending an exception during an exception handler, and require the same fix.
You don't have to use threads. You can use another process to do the blocking work, for instance, maybe using the subprocess module. If you want to share data structures between different parts of your program then Twisted is a great library for giving yourself control of this, and I'd recommend it if you care about blocking and expect to have this trouble a lot. The bad news with Twisted is you have to rewrite your code to avoid any blocking, and there is a fair learning curve.
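If the blocking work can be pushed into a separate program, the standard library's subprocess module already supports a timeout on the child process (Python 3.3+); a minimal sketch, where 'sleep 30' stands in for whatever long-running command does the work:

import subprocess

try:
    subprocess.run(["sleep", "30"], timeout=10, check=True)
except subprocess.TimeoutExpired:
    print("child process exceeded the 10 second limit and was killed")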
You can use threads to avoid blocking, but I'd regard this as a last resort, since it exposes you to a whole world of pain. Read a good book on concurrency before even thinking about using threads in production, e.g. Jean Bacon's "Concurrent Systems". I work with a bunch of people who do really cool high performance stuff with threads, and we don't introduce threads into projects unless we really need them.
The only "safe" way to do this, in any language, is to use a secondary process to do that timeout-thing, otherwise you need to build your code in such a way that it will time out safely by itself, for instance by checking the time elapsed in a loop or similar. If changing the method isn't an option, a thread will not suffice.
Why? Because you're risking leaving things in a bad state when you do. If the thread is simply killed mid-method, locks being held, etc. will just be held, and cannot be released.
So look at the process way, do not look at the thread way.
I would usually prefer using a contextmanager as suggested by @josh-lee, but in case someone is interested in having this implemented as a decorator, here's an alternative.
Here's what it would look like:
import time
from timeout import timeout

class Test(object):
    @timeout(2)
    def test_a(self, foo, bar):
        print(foo)
        time.sleep(1)
        print(bar)
        return 'A Done'

    @timeout(2)
    def test_b(self, foo, bar):
        print(foo)
        time.sleep(3)
        print(bar)
        return 'B Done'

t = Test()
print(t.test_a('python', 'rocks'))
print(t.test_b('timing', 'out'))
And this is the timeout.py module:
import threading

class TimeoutError(Exception):
    pass

class InterruptableThread(threading.Thread):
    def __init__(self, func, *args, **kwargs):
        threading.Thread.__init__(self)
        self._func = func
        self._args = args
        self._kwargs = kwargs
        self._result = None

    def run(self):
        self._result = self._func(*self._args, **self._kwargs)

    @property
    def result(self):
        return self._result

class timeout(object):
    def __init__(self, sec):
        self._sec = sec

    def __call__(self, f):
        def wrapped_f(*args, **kwargs):
            it = InterruptableThread(f, *args, **kwargs)
            it.start()
            it.join(self._sec)
            if not it.is_alive():
                return it.result
            raise TimeoutError('execution expired')
        return wrapped_f
The output:
python
rocks
A Done
timing
Traceback (most recent call last):
...
timeout.TimeoutError: execution expired
out
Notice that even if the TimeoutError is thrown, the decorated method will continue to run in a different thread. If you would also want this thread to be "stopped" see: Is there any way to kill a Thread in Python?
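One small mitigation on top of the decorator above: marking the worker thread as a daemon keeps a hung call from blocking interpreter exit (the call still runs until the process ends, though). A sketch of how the __call__ method of the timeout class would change:

def __call__(self, f):
    def wrapped_f(*args, **kwargs):
        it = InterruptableThread(f, *args, **kwargs)
        it.daemon = True        # don't let a hung worker keep the process alive at exit
        it.start()
        it.join(self._sec)
        if not it.is_alive():
            return it.result
        raise TimeoutError('execution expired')
    return wrapped_f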
Using a simple decorator
Here's the version I made after studying the above answers. Pretty straightforward.
import signal
from contextlib import contextmanager
from functools import wraps

class TimeoutException(Exception):
    pass

def function_timeout(seconds: int):
    """Wrapper of Decorator to pass arguments"""
    def decorator(func):
        @contextmanager
        def time_limit(seconds_):
            def signal_handler(signum, frame):  # noqa
                raise TimeoutException(f"Timed out in {seconds_} seconds!")
            signal.signal(signal.SIGALRM, signal_handler)
            signal.alarm(seconds_)
            try:
                yield
            finally:
                signal.alarm(0)

        @wraps(func)
        def wrapper(*args, **kwargs):
            with time_limit(seconds):
                return func(*args, **kwargs)
        return wrapper
    return decorator
How to use?
@function_timeout(seconds=5)
def my_naughty_function():
    while True:
        print("Try to stop me ;-p")
Well of course, don't forget to import the decorator if it is in a separate file.
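To catch the timeout at the call site, a minimal sketch using the TimeoutException defined alongside the decorator above:

try:
    my_naughty_function()
except TimeoutException as e:
    print(e)   # "Timed out in 5 seconds!"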
Here's a timeout function I think I found via Google, and it works for me.
From:
http://code.activestate.com/recipes/473878/
def timeout(func, args=(), kwargs={}, timeout_duration=1, default=None):
    '''This function will spawn a thread and run the given function using the args, kwargs and
    return the given default value if the timeout_duration is exceeded
    '''
    import threading

    class InterruptableThread(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)
            self.result = default

        def run(self):
            try:
                self.result = func(*args, **kwargs)
            except:
                self.result = default

    it = InterruptableThread()
    it.start()
    it.join(timeout_duration)
    # if the thread is still alive it has timed out and self.result is still the default;
    # either way, the thread keeps running in the background
    return it.result
The method from @user2283347 works when tested, but we want to get rid of the traceback messages. Using the pass trick from "Remove traceback in Python on Ctrl-C", the modified code is:
from contextlib import contextmanager
import threading
import _thread

class TimeoutException(Exception):
    pass

@contextmanager
def time_limit(seconds):
    timer = threading.Timer(seconds, lambda: _thread.interrupt_main())
    timer.start()
    try:
        yield
    except KeyboardInterrupt:
        pass
    finally:
        # if the action ends in specified time, timer is canceled
        timer.cancel()
def timeout_svm_score(i):
    # from sklearn import svm
    # import numpy as np
    # from IPython.core.display import display
    # %store -r names X Y
    clf = svm.SVC(kernel='linear', C=1).fit(np.nan_to_num(X[[names[i]]]), Y)
    score = clf.score(np.nan_to_num(X[[names[i]]]), Y)
    # scoressvm.append((score, names[i]))
    display((score, names[i]))
%%time
with time_limit(5):
    i = 0
    timeout_svm_score(i)
# Wall time: 14.2 s

%%time
with time_limit(20):
    i = 0
    timeout_svm_score(i)
# (0.04541284403669725, '计划飞行时间')
# Wall time: 16.1 s

%%time
with time_limit(5):
    i = 14
    timeout_svm_score(i)
# Wall time: 5h 43min 41s
We can see that this method may take far longer than requested to interrupt the calculation: we asked for 5 seconds, but it took over 5 hours to stop.
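Because the fit call spends its time inside native code and only notices the KeyboardInterrupt when control returns to the interpreter, a process-based limit (like the multiprocessing answer earlier in this thread) is a more reliable way to cut it off. A rough sketch, reusing the run_with_limited_time helper defined above and assuming the globals it needs (svm, np, X, Y, names) are available in the child process:

# Run the scoring in a child process that can be terminated after 5 seconds,
# even while it is stuck inside native (C/Fortran) code.
finished = run_with_limited_time(timeout_svm_score, (14,), {}, 5)
if not finished:
    print('timed out after 5 seconds')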
This code works on Windows Server Datacenter 2016 with Python 3.7.3; I didn't test it on Unix. After mixing some answers from Google and Stack Overflow, it finally worked for me like this:
from multiprocessing import Process, Lock
import time
import os

def f(lock, id, sleepTime):
    lock.acquire()
    print("I'm P" + str(id) + " Process ID: " + str(os.getpid()))
    lock.release()
    time.sleep(sleepTime)  # sleeps for some time
    print("Process: " + str(id) + " took this much time:" + str(sleepTime))
    time.sleep(sleepTime)
    print("Process: " + str(id) + " took this much time:" + str(sleepTime*2))

if __name__ == '__main__':
    timeout_function = float(9)  # 9 seconds for max function time
    print("Main Process ID: " + str(os.getpid()))
    lock = Lock()
    p1 = Process(target=f, args=(lock, 1, 6,))  # Here you can change from 6 to 3 for instance, so you can watch the behavior
    start = time.time()
    print(type(start))
    p1.start()
    if p1.is_alive():
        print("process running a")
    else:
        print("process not running a")
    while p1.is_alive():
        timeout = time.time()
        if timeout - start > timeout_function:
            p1.terminate()
            print("process terminated")
        print("watching, time passed: " + str(timeout - start))
        time.sleep(1)
    if p1.is_alive():
        print("process running b")
    else:
        print("process not running b")
    p1.join()
    if p1.is_alive():
        print("process running c")
    else:
        print("process not running c")
    end = time.time()
    print("I am the main process, the two processes are done")
    print("Time taken:- " + str(end - start) + " secs")  # MainProcess terminates at approx ~ 5 secs.
    time.sleep(5)  # To see if on Task Manager the child process is really being terminated, and it is
    print("finishing")
The main code is from this link:
Create two child process using python(windows)
Then I used .terminate() to kill the child process. You can see that the function f makes two prints, one after 5 seconds and another after 10 seconds. However, with a 7-second sleep and the terminate(), it does not show the last print.
It worked for me, hope it helps!
As my project heavily relies on asynchronous network I/O, I always have to expect some weird network error to occur: whether it is the service I'm connecting to having an API outage, my own server having a network issue, or something else. Issues like that appear, and there's no real way around them. So, I eventually ended up trying to figure out a way to effectively "pause" a coroutine's execution from outside whenever such a network issue occurred, until the connection has been reestablished. My approach is writing a decorator pausable that takes an argument pause, which is a coroutine function that will be yielded from / awaited, like this:
import asyncio

import aiohttp
import websockets
from websockets.exceptions import ConnectionClosed  # assumed source of ConnectionClosed

def pausable(pause, resume_check=None, delay_start=None):
    if not asyncio.iscoroutinefunction(pause):
        raise TypeError("pause must be a coroutine function")
    if not (delay_start is None or asyncio.iscoroutinefunction(delay_start)):
        raise TypeError("delay_start must be a coroutine function")

    def wrapper(coro):
        @asyncio.coroutine
        def wrapped(*args, **kwargs):
            if delay_start is not None:
                yield from delay_start()
            for x in coro(*args, **kwargs):
                try:
                    yield from pause()
                    yield x
                # catch exceptions the regular discord.py user might not catch
                except (asyncio.CancelledError,
                        aiohttp.ClientError,
                        websockets.WebSocketProtocolError,
                        ConnectionClosed,
                        # bunch of other network errors
                        ) as ex:
                    if any((resume_check() if resume_check is not None else False and
                            isinstance(ex, asyncio.CancelledError),
                            # clean disconnect
                            isinstance(ex, ConnectionClosed) and ex.code == 1000,
                            # connection issue
                            not isinstance(ex, ConnectionClosed))):
                        yield from pause()
                        yield x
                    else:
                        raise
        return wrapped
    return wrapper
Pay special attention to this bit:
for x in coro(*args, **kwargs):
    yield from pause()
    yield x
Example usage (ready is an asyncio.Event):
@pausable(ready.wait, resume_check=restarting_enabled, delay_start=ready.wait)
@asyncio.coroutine
def send_test_every_minute():
    while True:
        yield from client.send("Test")
        yield from asyncio.sleep(60)
However, this does not seem to work and it does not seem like an elegant solution to me. Is there a working solution that is compatible with Python 3.5.3 and above? Compatibility with Python 3.4.4 and above is desirable.
Addendum
Just try/excepting the exceptions raised in the coroutine that needs to be paused is neither always possible nor a viable option for me, as it heavily violates a core code design principle (DRY) I'd like to comply with; in other words, excepting so many exceptions in so many coroutine functions would make my code messy.
A few words about the current solution.
for x in coro(*args, **kwargs):
    try:
        yield from pause()
        yield x
    except:
        ...
You won't be able to catch exceptions this way:
the exception is raised outside of the for-loop,
and the generator is exhausted (not usable) after the first exception anyway.
import asyncio

@asyncio.coroutine
def test():
    yield from asyncio.sleep(1)
    raise RuntimeError()
    yield from asyncio.sleep(1)
    print('ok')

@asyncio.coroutine
def main():
    coro = test()
    try:
        for x in coro:
            try:
                yield x
            except Exception:
                print('Exception is NOT here.')
    except Exception:
        print('Exception is here.')

    try:
        next(coro)
    except StopIteration:
        print('And after first exception generator is exhausted.')

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.close()
Output:
Exception is here.
And after first exception generator is exhausted.
Even if it were possible to resume, consider what would happen if the coroutine had already done some cleanup operations because of the exception.
Given all of the above, if some coroutine raises an exception, the only option you have is to suppress that exception (if you want) and re-run the coroutine. You can re-run it after some event if you want. Something like this:
def restart(ready_to_restart):
    def wrapper(func):
        @asyncio.coroutine
        def wrapped(*args, **kwargs):
            while True:
                try:
                    return (yield from func(*args, **kwargs))
                except (ConnectionClosed,
                        aiohttp.ClientError,
                        websockets.WebSocketProtocolError,
                        # bunch of other network errors
                        ) as ex:
                    yield from ready_to_restart.wait()
        return wrapped   # return the inner coroutine so the decorator actually applies
    return wrapper

ready_to_restart = asyncio.Event()  # set it when you're sure the network is fine
                                    # and you're ready to restart
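Usage would look roughly like this (a sketch; send_updates, client, and the 60-second loop are illustrative, and ready_to_restart must exist before the decorator is applied):

@restart(ready_to_restart)
@asyncio.coroutine
def send_updates(client):
    while True:
        yield from client.send("Test")   # any of the network errors above restarts this coroutine
        yield from asyncio.sleep(60)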
Update:
However, how would I make the coroutine continue where it was
interrupted now?
Just to make things clear:
@asyncio.coroutine
def test():
    with aiohttp.ClientSession() as client:
        yield from client.request_1()
        # STEP 1:
        # Let's say the line above raises an error.
        # STEP 2:
        # Imagine you somehow managed to return to this place
        # after the exception above, to resume execution.
        # But what is the state of 'client' now?
        # It was freed by the context manager when we left the coroutine.
        yield from client.request_2()
Neither functions nor coroutines are designed to resume execution after an exception has propagated out of them.
The only thing that comes to mind is to split the complex operation into re-startable little ones, while the whole complex operation stores its state:
@asyncio.coroutine
def complex_operation():
    with aiohttp.ClientSession() as client:
        res = yield from step_1(client)
        # res/client - is a state of complex_operation.
        # It can be used by re-startable steps.
        res = yield from step_2(client, res)

@restart(ready_to_restart)
@asyncio.coroutine
def step_1(client):
    ...

@restart(ready_to_restart)
@asyncio.coroutine
def step_2(client, res):
    ...
I would like to write Python code that runs several functions sequentially: start with the first function, check that it doesn't throw any error, then start running the second function, and so on. I used the following strategy, but it didn't stop when the first function threw an error and kept running the other functions:
try:
    firstFunc()
except:
    raise ExceptionFirst('job failed!')
else:
    try:
        secondFunc()
    except:
        raise ExceptionSecond('second function failed!')
------------------------------ ADD-ON-------------------------------------------
All functions are defined separately and don't interact with each other. The structure of each function is like the following (e.g., the first function):
p = subprocess.Popen("abaqus script=py.py", shell=True)
p.communicate()  # now wait
if p.returncode == 0:
    print("job is successfully done!")
I changed my function as follows and it worked successfully:
p = subprocess.check_call("abaqus script=py.py", shell=True)
if p == 0:
    print("job is successfully done!")
But I'm stuck with the same problem for one of my functions, which has the following structure:
p = subprocess.check_call("java -jar javaCode.jar XMLfile.xml", shell=True)
if p == 0:
    print("job is successfully done!")
It throws an error, but Python prints out "job is successfully done!" for it anyway and keeps running the other functions!
---------------------------------------- Full Code ------------------------------------------------
import subprocess
import sys, os

def abq_script():
    p = subprocess.check_call("abaqus script=py.py", shell=True)
    if p == 0:
        print("job is successfully done!\n")

def abq_java():
    p = subprocess.check_call("java -jar FisrtJavaCode.jar", shell=True)
    if p == 0:
        print("job is successfully done!\n")

def java_job():
    p = subprocess.check_call("java -jar secondJavaCode.jar XMLfile.xml", shell=True)
    if p == 0:
        print("job is successfully done!\n")

def abq_python_java():
    funcs = [abq_script, abq_java, java_job]
    exc = Exception('job failed!')
    for func in funcs:
        try:
            func()
        except Exception as e:
            print(str(e))
            break
If the first or second function fails, an exception is thrown and the program stops running. But if the last function (java_job) fails, the program doesn't throw any exception and keeps running.
Put your functions into a list and iterate through them, calling each.
funcs = [firstFunc, secondFunc, ...]

for func in funcs:
    try:
        func()
    except ValueError:  # Or whatever specific exception you want to handle...
        # Handle it...
        break
(See here for discussion of why it's far better to catch the specific exception you're trying to handle.)
Edit:
If each function has a specific set of exceptions that you'd like to catch, but it differs for the functions, you could put them into the list with the functions.
funcs = [
    (firstFunc, (ValueError, TypeError)),
    (secondFunc, (ZeroDivisionError, NameError)),
    # More functions and their exceptions here...
]

for func, exceptions_to_catch in funcs:
    try:
        func()
    except exceptions_to_catch:
        # Handle it...
        break
More edit:
I'd structure this differently - the only thing that differs between jobs is the command to run. You could do this with something like:
import subprocess

commands = [
    "abaqus script=py.py",
    "java -jar FisrtJavaCode.jar",
    "java -jar javaCode.jar XMLfile.xml",
]

for command in commands:
    subprocess.check_call(command, shell=True)
    print('command {!r} successfully done!'.format(command))
You don't need to catch the subprocess.CalledProcessError that might be thrown if one of the commands sets a nonzero return code - that'll stop processing the way you want.
Now it sounds like the underlying problem is that java -jar javaCode.jar XMLfile.xml doesn't set the return code correctly when it fails.
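Until the jar is fixed, one workaround is to inspect the command's output yourself instead of trusting the exit code. This is only a sketch (Python 3.7+ for capture_output; the "Exception" check is an assumption about what the failing jar prints to stderr, so adjust it to the real output):

import subprocess

result = subprocess.run(
    "java -jar javaCode.jar XMLfile.xml",
    shell=True, capture_output=True, text=True,
)
# The jar reports exit code 0 even on failure, so also look at what it printed.
if result.returncode != 0 or "Exception" in result.stderr:
    raise RuntimeError("java job failed:\n" + result.stderr)
print("job is successfully done!")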
Why are you trying to try/except if you're only throwing another exception? Let the original exception bubble up if you're trying to stop execution thereafter.
firstFunc()
secondFunc()
thirdFunc()
we_all_about_the_funk()
I have a piece of code in Python that seems to cause an error probabilistically because it accesses a server, and sometimes that server has a 500 internal server error. I want to keep trying until I do not get the error. My solution was:
while True:
    try:
        # code with possible error
    except:
        continue
    else:
        # the rest of the code
        break
This seems like a hack to me. Is there a more Pythonic way to do this?
It won't get much cleaner. This is not a very clean thing to do. At best (which would be more readable anyway, since the condition for the break is up there with the while), you could create a variable result = None and loop while it is None. You should also adjust the variables, and you can replace continue with the semantically perhaps correct pass (you don't care if an error occurs, you just want to ignore it) and drop the break - this also gets the rest of the code, which only executes once, out of the loop. Also note that bare except: clauses are evil, for reasons given in the documentation.
Example incorporating all of the above:
result = None
while result is None:
    try:
        # connect
        result = get_data(...)
    except:
        pass
# other code that uses result but is not involved in getting it
Here is one that hard-fails after 4 attempts and waits 2 seconds between attempts. Change it as you wish to get what you want from this one:
from time import sleep

for x in range(0, 4):  # try 4 times
    try:
        # msg.send()
        # put your logic here
        str_error = None
    except Exception as e:
        str_error = str(e)  # keep the message; the 'as' name is cleared after the except block
    if str_error:
        sleep(2)  # wait for 2 seconds before trying to fetch the data again
    else:
        break
Here is an example with backoff:
from time import sleep

sleep_time = 2
num_retries = 4
for x in range(0, num_retries):
    try:
        # put your logic here
        str_error = None
    except Exception as e:
        str_error = str(e)
    if str_error:
        sleep(sleep_time)  # wait before trying to fetch the data again
        sleep_time *= 2  # Implement your backoff algorithm here, i.e. exponential backoff
    else:
        break
Maybe something like this:
connected = False

while not connected:
    try:
        try_connect()
        connected = True
    except ...:
        pass
When retrying due to error, you should always:
implement a retry limit, or you may get blocked on an infinite loop
implement a delay, or you'll hammer resources too hard, such as your CPU or the already distressed remote server
A simple generic way to solve this problem while covering those concerns would be to use the backoff library. A basic example:
import backoff

@backoff.on_exception(
    backoff.expo,
    MyException,
    max_tries=5
)
def make_request(self, data):
    # do the request
This code wraps make_request with a decorator which implements the retry logic. We retry whenever our specific error MyException occurs, with a limit of 5 retries. Exponential backoff is a good idea in this context to help minimize the additional burden our retries place on the remote server.
The itertools iter_except recipe encapsulates this idea of "calling a function repeatedly until an exception is raised". It is similar to the accepted answer, but the recipe gives an iterator instead.
From the recipes:
def iter_except(func, exception, first=None):
    """ Call a function repeatedly until an exception is raised."""
    try:
        if first is not None:
            yield first()  # For database APIs needing an initial cast to db.first()
        while True:
            yield func()
    except exception:
        pass
You can certainly implement the latter code directly. For convenience, I use a separate library, more_itertools, that implements this recipe for us (optional).
Code
import more_itertools as mit
list(mit.iter_except([0, 1, 2].pop, IndexError))
# [2, 1, 0]
Details
Here the pop method (or given function) is called for every iteration of the list object until an IndexError is raised.
For your case, given some connect_function and expected error, you can make an iterator that calls the function repeatedly until an exception is raised, e.g.
mit.iter_except(connect_function, ConnectionError)
At this point, treat it as any other iterator by looping over it or calling next().
Here's a utility function that I wrote to wrap the retry-until-success into a neater package. It uses the same basic structure, but prevents repetition. It could be modified to catch and rethrow the exception on the final try relatively easily.
def try_until(func, max_tries, sleep_time):
    for _ in range(0, max_tries):
        try:
            return func()
        except:
            sleep(sleep_time)
    raise WellNamedException()
    # could be 'return sensibleDefaultValue'
It can then be called like this:
result = try_until(my_function, 100, 1000)
If you need to pass arguments to my_function, you can either do this by having try_until forward the arguments, or by wrapping it in a no argument lambda:
result = try_until(lambda : my_function(x,y,z), 100, 1000)
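The argument-forwarding variant mentioned above could look like this; a sketch, where the bare except and the final RuntimeError are placeholders you'd adapt to the exceptions you actually expect:

from time import sleep

def try_until(func, max_tries, sleep_time, *args, **kwargs):
    for _ in range(max_tries):
        try:
            return func(*args, **kwargs)   # forward the caller's arguments
        except Exception:                  # narrow this to the errors you expect
            sleep(sleep_time)
    raise RuntimeError("all {} attempts failed".format(max_tries))

result = try_until(my_function, 100, 1000, x, y, z)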
Maybe decorator-based?
You can pass, as decorator arguments, the list of exceptions on which we want to retry and/or the number of tries.
def retry(exceptions=None, tries=None):
    if exceptions:
        exceptions = tuple(exceptions)
    def wrapper(fun):
        def retry_calls(*args, **kwargs):
            if tries:
                for _ in range(tries):
                    try:
                        fun(*args, **kwargs)
                    except exceptions:
                        pass
                    else:
                        break
            else:
                while True:
                    try:
                        fun(*args, **kwargs)
                    except exceptions:
                        pass
                    else:
                        break
        return retry_calls
    return wrapper

from random import randint

@retry([NameError, ValueError])
def foo():
    if randint(0, 1):
        raise NameError('FAIL!')
    print('Success')

@retry([ValueError], 2)
def bar():
    if randint(0, 1):
        raise ValueError('FAIL!')
    print('Success')

@retry([ValueError], 2)
def baz():
    while True:
        raise ValueError('FAIL!')

foo()
bar()
baz()
Of course, the try part should be moved to another function because we use it in both loops, but it's just an example ;)
Like most of the others, I'd recommend trying a finite number of times and sleeping between attempts. This way, you don't find yourself in an infinite loop in case something were to actually happen to the remote server.
I'd also recommend continuing only when you get the specific exception you're expecting. This way, you can still handle exceptions you might not expect.
from urllib.error import HTTPError
import traceback
from time import sleep

attempts = 10

while attempts > 0:
    try:
        # code with possible error
    except HTTPError:
        attempts -= 1
        sleep(1)
        continue
    except:
        print(traceback.format_exc())
    # the rest of the code
    break
Also, you don't need an else block. Because of the continue in the except block, you skip the rest of the loop until the try block works, the while condition gets satisfied, or an exception other than HTTPError comes up.
What about the retrying library on PyPI?
I have been using it for a while and it does exactly what I want and more (retry on error, retry when None, retry with timeout). Below is an example from their website:
import random
from retrying import retry

@retry
def do_something_unreliable():
    if random.randint(0, 10) > 1:
        raise IOError("Broken sauce, everything is hosed!!!111one")
    else:
        return "Awesome sauce!"

print(do_something_unreliable())
import time
import urllib.request as ur

e = ''
while e == '':
    try:
        response = ur.urlopen('https://https://raw.githubusercontent.com/MrMe42/Joe-Bot-Home-Assistant/mac/Joe.py')
        e = ' '
    except:
        print('Connection refused. Retrying...')
        time.sleep(1)
This should work. It sets e to '' and the while loop checks whether it is still ''. If an error is caught by the try statement, it prints that the connection was refused, waits 1 second, and then starts over. It will keep going until there is no error in the try, which then sets e to ' ' and ends the while loop.
I'm attempting this now; this is what I came up with:
import time
from datetime import datetime

placeholder = 1
while placeholder is not None:
    try:
        # Code
        placeholder = None
    except Exception as e:
        print(str(datetime.time(datetime.now()))[:8] + str(e))  # To log the errors
        placeholder = e
        time.sleep(0.5)
        continue
Here is a short piece of code I use to capture the error as a string. It will retry until it succeeds. This catches all exceptions, but you can change that as you wish.
start = 0
str_error = "Not executed yet."
while str_error:
    try:
        # replace the lines below with your logic, i.e. time out, max attempts
        start = input("enter a number, 0 for fail, last was {0}: ".format(start))
        new_val = 5 / int(start)
        str_error = None
    except Exception as e:
        str_error = str(e)  # keep the message; the 'as' name is cleared after the except block
WARNING: This code will be stuck in a forever loop until no exception occurs. This is just a simple example and MIGHT require you to break out of the loop sooner or sleep between retries.