gevent.Timeout not raised - python

I have a server that does "some stuff" in a section and I have a "with gevent.Timeout(5)" around that. I have some checks going on in another greenlet and through those I noticed that one of the greenlets doing that "some stuff" had been running for 45 minutes. I had to eventually restart the program to kill it (I know of other ways of killing it but that's not the problem..).
I am monkey patching with gevent.monkey.patch_all() as well. The "some stuff" part does involve network connections and I am guessing something got stuck in one of those places. I don't understand why the timeout exception was not raised. Does anyone have any idea why the gevent.Timeout exception might not have been raised?

Whenever I've used gevent.Timeout, I've also used it as a context manager but with the second argument False. This way the context manager suppresses any exception and just leaves the chunk of code. You can follow up by checking if, say, the block set a value successfully:
result = None
with gevent.Timeout(5, False):
    pass  # something that stalls (and would set result on success)
if result is None:
    pass  # take care of business
This has worked for me very reliably. Note that the default for gevent.Timeout's second argument (exception) is None, which makes it raise the Timeout instance itself -- have you tried passing your own exception type instead? Or even Exception?
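For illustration, here is a minimal, self-contained sketch of the custom-exception variant (the StallTimeout name and the gevent.sleep call standing in for the stalled network operation are placeholders, not code from the question):
import gevent
from gevent import monkey
monkey.patch_all()

class StallTimeout(Exception):
    """Placeholder exception meaning 'the guarded block ran too long'."""

try:
    # raise StallTimeout in this greenlet if the block takes more than 5 seconds
    with gevent.Timeout(5, StallTimeout("stalled for more than 5 seconds")):
        gevent.sleep(10)  # stand-in for the network call that hangs
except StallTimeout as exc:
    print("caught:", exc)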


How to catch when *any* error occurs in a whole script? [duplicate]

How do you best handle multiple levels of methods in a call hierarchy that raise exceptions, so that if it is a fatal error the program will exit (after displaying an error dialog)?
I'm basically coming from Java. There I would simply declare any methods as throws Exception, re-throw it and catch it somewhere at the top level.
However, Python is different. My Python code basically looks like the below.
EDIT: added much simpler code...
Main entry function (plugin.py):
def main(catalog):
    print "Executing main(catalog)... "
    # instantiate generator
    gen = JpaAnnotatedClassGenerator(options)
    # run generator
    try:
        gen.generate_bar() # doesn't bubble up
    except ValueError as error:
        Utilities.show_error("Error", error.message, "OK", "", "")
        return
    # ... usually do the real work here if no error
JpaAnnotatedClassGenerator class (engine.py):
class JpaAnnotatedClassGenerator:
    def generate_bar(self):
        self.generate_value_error()

    def generate_value_error(self):
        raise ValueError("generate_value_error() raised an error!")
I'd like the exception to be thrown back to the caller and to propagate up the call chain until it reaches the outermost try-except, which displays an error dialog with the exception's message.
QUESTION:
How is this best done in Python? Do I really have to repeat try-except for every method being called?
BTW: I am using Python 2.6.x and I cannot upgrade due to being bound to MySQL Workbench that provides the interpreter (Python 3 is on their upgrade list).
If you don't catch an exception, it bubbles up the call stack until someone does. If no one catches it, the runtime will get it and die with the exception error message and a full traceback. IOW, you don't have to explicitly catch and reraise your exception everywhere - which would actually defeat the whole point of having exceptions. Actually, despite being primarily used for errors / unexpected conditions, exceptions are first and foremost a control flow tool allowing you to break out of the normal execution flow and pass control (and some information) to any arbitrary place up in the call stack.
From this POV your code seems mostly correct (caveat: I didn't bother reading the whole thing, just had a quick look), except (no pun indented) for a couple of points:
First, you should define your own specific exception class(es) instead of using the builtin ValueError (you can inherit from it if it makes sense to you) so you're sure you only catch the exact exceptions you expect (quite a few layers "under" your own code could raise a ValueError that you didn't expect).
Then, you may (or may not, depending on how your code is used) also want to add a catch-all top-level handler in your main() function so you can properly log (using the logging module) all errors, and possibly free resources, do some cleanup etc. before your process dies.
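As a rough sketch of both points (the GeneratorError name and the logging setup are invented for the example, not taken from the original code):
import logging

logger = logging.getLogger(__name__)

class GeneratorError(ValueError):
    """Project-specific error; inheriting ValueError keeps existing handlers working."""

def main(catalog):
    try:
        gen = JpaAnnotatedClassGenerator(options)
        gen.generate_bar()
    except GeneratorError as error:
        Utilities.show_error("Error", str(error), "OK", "", "")
    except Exception:
        logger.exception("unexpected error, aborting")
        # free resources / do cleanup here before the process dies
        raise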
As a side note, you may also want to learn and use proper string formatting, and - at least if performance is an issue - avoid duplicating constant calls like these:
elif AnnotationUtil.is_embeddable_table(table) and AnnotationUtil.is_secondary_table(table):
    # ...
elif AnnotationUtil.is_embeddable_table(table):
    # ...
elif AnnotationUtil.is_secondary_table(table):
    # ...
Given Python's very dynamic nature, neither the compiler nor runtime can safely optimize those repeated calls (the method could have been dynamically redefined between calls), so you have to do it yourself.
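For example, hoisting the results into local names (just a sketch of the idea, with placeholder branch bodies):
is_embeddable = AnnotationUtil.is_embeddable_table(table)
is_secondary = AnnotationUtil.is_secondary_table(table)

if is_embeddable and is_secondary:
    pass  # ...
elif is_embeddable:
    pass  # ...
elif is_secondary:
    pass  # ...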
EDIT:
When trying to catch the error in the main() function, exceptions DON'T bubble up, but when I use this pattern one level deeper, bubbling-up seems to work.
You can easily check that it works correctly with a simple MCVE:
def deeply_nested():
    raise ValueError("foo")

def nested():
    return deeply_nested()

def firstline():
    return nested()

def main():
    try:
        firstline()
    except ValueError as e:
        print("got {}".format(e))
    else:
        print("you will not see me")

if __name__ == "__main__":
    main()
It appears the software that supplies the Python env is somehow treating the main plugin file in a wrong way. Looks like I will have to check with the MySQL Workbench guys.
Uhu... Even embedded, the exception mechanism should still work as expected - at least for the part of the call stack that depends on your main function (can't tell what happens higher up in the call stack). But given how MySQL treats errors (what about having your data silently truncated?), I wouldn't be especially surprised if they hacked the runtime to silently pass any error in plugin code xD
It is fine for errors to bubble up
Python's exceptions are unchecked, meaning you have no obligation to declare or handle them. Even if you know that something may raise, only catch the error if you intend to do something with it. It is fine to have exception-transparent layers, which gracefully abort as an exception bubbles through them:
def logged_get(map: dict, key: str):
    result = map[key]  # this may raise, but there is no state to corrupt
    # the following is not meaningful if an exception occurred
    # it is fine for it to be skipped by the exception bubbling up
    print(map, '[%s]' % key, '=>', result)
    return result
In this case, logged_get will simply forward any KeyError (and others) that are raised by the lookup.
If an outer caller knows how to handle the error, it can do so.
So, just call self.create_collection_embeddable_class_stub the way you do.
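For instance, a caller further up the stack could be the one that decides what a missing key means (a sketch; the get_port name and the fallback value are invented for the example):
def get_port(config: dict) -> int:
    try:
        return int(logged_get(config, 'port'))
    except KeyError:
        # only this caller knows that a missing 'port' should fall back to a default
        return 8080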
It is fine for errors to kill the application
Even if nothing handles an error, the interpreter does. You get a stack trace, showing what went wrong and where. Fatal errors of the kind "only happens if there is a bug" can "safely" bubble up to show what went wrong.
In fact, exiting the interpreter and assertions use this mechanism as well.
>>> assert 2 < 1, "This should never happen"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AssertionError: This should never happen
For many services, you can use this even in deployment - for example, systemd would log that for a Linux system service. Only try to suppress errors for the outside if security is a concern, or if users cannot handle the error.
It is fine to use precise errors
Since exceptions are unchecked, you can use arbitrarily many without overstraining your API. This allows you to use custom errors that signal different levels of problems:
class DBProblem(Exception):
    """Something is wrong about our DB..."""

class DBEntryInconsistent(DBProblem):
    """A single entry is broken"""

class DBInconsistent(DBProblem):
    """The entire DB is foobar!"""
It is generally a good idea not to re-use builtin errors, unless your use-case actually matches their meaning. This allows you to handle errors precisely if needed:
try:
    gen.generate_classes(catalog)
except DBEntryInconsistent:
    logger.error("aborting due to corrupted entry")
    sys.exit(1)
except DBInconsistent as err:
    logger.error("aborting due to corrupted DB")
    Utility.inform_db_support(err)
    sys.exit(1)
# do not handle ValueError, KeyError, MemoryError, ...
# they will show up as a stack trace

Is there anything in Python like a try-finally except it still raises exceptions normally?

Suppose I want to run some function in Python, ensuring that whether it fails or not some cleanup code is always run. Something like this:
try: some_function()
finally: cleanup()
Ok, simple enough. But hold up! If any exceptions occur in the try block, they'll be suppressed. So really this construct did more than I wanted. All I really wanted to do was make sure some cleanup code runs after the function whether it finishes successfully or not. I still want any exceptions which occur in my function to happen normally. Perhaps that would look something like this:
do: some_function()
finally: cleanup()
Of course, that isn't real Python code. The actual way which I've found to do this is like follows:
try: some_function()
except Exception as error: raise error
finally: cleanup()
Eww, gross. I'm adding an extra line to re-throw an exception which I wanted to just happen normally in the first place. And additionally the stack trace now has an extra line in it showing the except Exception as error: raise error bit. This seems less than ideal to me, but it also seems to be the only way to accomplish what I'm trying to do.
So is this really the way I'm supposed to go about this?
If yes, then I have a further question: Why doesn't Python have a dedicated construct for simply ensuring some block of code runs whether some other block succeeds or not?
As far as my small mind is concerned, this whole idea has little to do with exception handling, since I don't actually want to keep exceptions from occurring where they normally would in the stack trace. Therefore, forcing people to use a try-except-finally construct seems just weird to me.
Python does!
try:
    1/0
finally:
    print("Hello, world!")
print("This will not print.")
Alright, so as @user2357112 pointed out, it looks like I somehow had the wild misconception that the try part of a try-except-finally construct is what catches exceptions. If anyone else gets confused similarly... it's the except bit that does the catching. Pretty obvious after some thinking, but hey, everyone has brain farts sometimes.

Pepper ignores IF ELSE completely

I'm analysing an AWS response in python in the following script:
# var definition
conversationName = 'NO NAME'

# in the MyClass
if len(resp['FaceMatches']) > 0:
    faceRecognized = resp['FaceMatches'][0]['Face']['ExternalImageId']
    self.logger.info(str(faceRecognized))
    if resp['FaceMatches'][0]['Face']['ExternalImageId'] == self.conversationName:
        self.logger.info("Name is the same")
        return
    else:
        self.logger.info('Name has changed!')
        self.conversationName = faceRecognized.split('_')[0]
        self.pepperTTS.say("Hi " + str(faceRecognized.split('_')[0]) + ". Can I help you with something?")
        return
else:
    self.logger.info("No face rekognized so far.")
    return
The problem is with the second IF ELSE. When I run the program it seems to ignore this IF ELSE completely and neither prints "Name is the same" nor "Name has changed!". And it does not show any errors when running the script.
Does anyone see the error or can give some tips to correct the script?
What's most likely happening is that resp['FaceMatches'][0]['Face']['ExternalImageId'] is raising an exception because one of those keys / indexes is wrong, and then the exception is not getting caught and gets swallowed silently - it's unfortunate, but in NAOqi a lot of exceptions get swallowed if no-one catches them (for example, in the callback in an ALMemory subscribe - as you probably have here).
So you should wrap that whole chunk in a big try/except and print whatever exception gets caught.
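A rough sketch of what that could look like (the on_face_detected callback name and the analyse_response helper are hypothetical; only the try/except pattern matters):
import traceback

def on_face_detected(self, value):
    try:
        self.analyse_response(value)  # the code from the question would go here
    except Exception:
        # NAOqi swallows exceptions raised in ALMemory callbacks,
        # so log the full traceback ourselves before it disappears
        self.logger.error(traceback.format_exc())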
This is a common enough situation that I created a helper library (documented here) with a log_exceptions decorator you can put on any function whose exceptions would otherwise be swallowed (typically: ALMemory event and signal callbacks; anything called with qi.async, anything called from outside your service ...), so your code doesn't get cluttered with try/except all over the place.

How to ignore all potential exceptions in Python?

I'm writing a custom backup script in Python. Sometimes the mkdir function or the print function or whatever function fails for various reasons. Such exceptions stop the whole script and halt the backup in the middle, which is very frustrating. So far, I've managed these problems by adding try: ... except: ... statements and properly handling the exceptions. However, one day some other statement or function might raise an exception as well because of some other reason that hasn't been triggered yet.
Is there a way to tell a script to proceed anyway? An equivalent of wrapping every single statement of the code in a try: ... except: pass clause? A log would be better of course.
I've noticed that when programming with GUI toolkits like Tkinter, the application keeps running even if exceptions are raised. Is it possible to accomplish this type of thing with the console?
There actually is a module that is supposed to do exactly that: https://github.com/ajalt/fuckitpy.
Although it was obviously written as a joke. I cannot imagine a situation where doing something like that is a good idea. God, I can't believe I'm even suggesting that as a solution.
What you should do instead is identify what lines of code can produce what kind of errors, and handle those errors properly. There are only so many places where errors can actually happen - mostly while interfacing with outside systems, including the filesystem, network, user input etc. And remember that actually failing is often better than continuing to "work" while messing up your data, files and so on. Exceptions are there for a reason, they are not a result of Guido's malice.
Python has no way of doing that, and for good reasons.
It seems you're confused about what it means to write "robust" software: a robust program is not a program that is hard to kill and that will keep running no matter what, but a program that will handle edge cases properly. Keeping running is NOT enough... keeping running while doing sensible things is the key point.
Unfortunately there's no way to do reasonable things automatically, and you have to think on a case-by-case basis about how to handle the error.
Beware that if a program has a lot of except clauses, it's rarely a good program. Exceptions are meant to be raised in a lot of places and caught almost nowhere.
Note also that every except clause is potentially a source of bugs... for example:
try:
    print my_dict[foo()]
except KeyError:
    ...
this cannot distinguish whether the KeyError comes from accessing a non-existing key in my_dict or whether it instead escaped from foo(). The two cases should rarely be handled the same way...
Better is to write:
key = foo()
if key in my_dict:
    print my_dict[key]
else:
    ...
so that only the case of a missing key in my_dict is handled, while a KeyError escaping from foo() will stop the program (stopping a program when you're not sure of what it's doing is the only reasonable thing to do).
Python has 'BaseException' as the base class for exception classes. You can catch and ignore this base class, and that will cover all exceptions.
try:
    pass  # your code here
except BaseException as exp:
    print "A General Exception Occurred"
try:
    pass  # code segment
except:
    pass
A bare except with pass will silently ignore all exceptions.
Normally, this one should catch everything:
try:
    ...
except:
    pass
Only problem is that you don't get the exception object with this syntax, but that was not asked for in this case.
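Since the question mentions that a log would be better, one variant (a sketch; the backup.log file name and the risky_step placeholder are made up) is to catch Exception and log the traceback instead of passing silently:
import logging

logging.basicConfig(filename='backup.log', level=logging.INFO)

try:
    risky_step()  # placeholder for mkdir, copy, print, ...
except Exception:
    logging.exception("step failed, continuing with the next one")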
You can add a general except block like @Kanwar Saad proposed. The question is, can you continue with your program in a valid state after the exception has been raised?
From the Zen of Python:
Errors should never pass silently.
Unless explicitly silenced.
Trying to catch all the exceptions you know about is, in my opinion, the best way to go here. If you cannot explicitly catch an exception you should not try to work around it. You (and your users) should know what exactly went wrong, otherwise your code might become a nightmare to debug.
If you are worried about losing backup data maybe you could do something like this:
def save_unfinished_backup():
    # try to find a graceful exit without losing any data
    pass

try:
    # some code
    pass
except OSError:
    # handle OS errors
    pass
except Exception:
    save_unfinished_backup()
    raise
This way you get both: a chance to fend off data loss and the exact error to debug it.
I hope this helps!
On a funny note: You could also use the fuckit module, which silences ALL errors, including syntax errors. Do not, ever, use this in production code though.
This should work perfectly. It will not print the "foo", but you will reach the print("bar") without a crash.
import fuckit

with fuckit:
    prnt("foo")  # intentional typo; the resulting NameError is swallowed
print("bar")
New answer for new Gen...
Python now ships with contextlib.suppress(), which tells the interpreter to suppress the indicated exceptions while running the block.
It can easily be imported and used as below:
from contextlib import suppress

with suppress(ValueError):
    int('this wont catch')
print('yea')
The above will work and won't raise the ValueError that comes from converting an invalid string to int...
It's cleaner than a third-party library.
Happy Hacking

Is there a way for workers in multiprocessing.Pool's apply_async to catch errors and continue?

When using multiprocessing.Pool's apply_async(), what happens when code in a worker breaks? This includes, I think, just exceptions, but there may be other things that make the worker functions fail.
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
for f in files:
    pool.apply_async(workerfunct, args=(*args), callback=callbackfunct)
As I understand it right now, the process/worker fails (all other processes continue) and anything past a thrown error is not executed, EVEN if I catch the error with try/except.
As an example, usually I'd except Errors and put in a default value and/or print out an error message, and the code continues. If my callback function involves writing to file, that's done with default values.
This answerer wrote a little about it:
I suspect the reason you're not seeing anything happen with your example code is because all of your worker function calls are failing. If a worker function fails, callback will never be executed. The failure won't be reported at all unless you try to fetch the result from the AsyncResult object returned by the call to apply_async. However, since you're not saving any of those objects, you'll never know the failures occurred. If I were you, I'd try using pool.apply while you're testing so that you see errors as soon as they occur.
If you're using Python 3.2+, you can use the error_callback keyword argument to handle exceptions raised in workers.
pool.apply_async(workerfunct, args=(*args), callback=callbackfunct, error_callback=handle_error)
handle_error will be called with the exception object as an argument.
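For example, a minimal handle_error might just log the failure so the pool keeps going (the message format is illustrative):
def handle_error(exc):
    # runs in the parent process with the exception raised by the worker
    print("worker failed: %r" % (exc,))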
If you're not, you have to wrap all your worker functions in a try/except to ensure your callback is executed. (I think you got the impression that this wouldn't work from my answer in that other question, but that's not the case. Sorry!):
def workerfunct(*args):
    try:
        pass  # Stuff
    except Exception as e:
        pass  # Do something here, maybe return e?

pool.apply_async(workerfunct, args=(*args), callback=callbackfunct)
You could also use a wrapper function if you can't/don't want to change the function you actually want to call:
def wrapper(func, *args):
    try:
        return func(*args)
    except Exception as e:
        return e

pool.apply_async(wrapper, args=(workerfunct, *args), callback=callbackfunct)
