I have the following mcve:
import logging

class MyGenIt(object):
    def __init__(self, name, content):
        self.name = name
        self.content = content

    def __iter__(self):
        with self:
            for o in self.content:
                yield o

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type:
            logging.error("Aborted %s", self,
                          exc_info=(exc_type, exc_value, traceback))
And here is sample use:
for x in MyGenIt("foo", range(10)):
    if x == 5:
        raise ValueError("got 5")
I would like logging.error to report the ValueError, but instead it reports GeneratorExit:
ERROR:root:Aborted <__main__.MyGenIt object at 0x10ca8e350>
Traceback (most recent call last):
File "<stdin>", line 8, in __iter__
GeneratorExit
When I catch GeneratorExit in __iter__:
def __iter__(self):
    with self:
        try:
            for o in self.content:
                yield o
        except GeneratorExit:
            return
nothing is logged (of course) because __exit__ is called with exc_type=None.
Why do I see GeneratorExit instead of ValueError in __exit__?
What do I do to get the desired behavior, i.e., ValueError in __exit__?
Just a quick note that you could "bring the context manager out" of the generator, and by only changing 3 lines get:
import logging

class MyGenIt(object):
    def __init__(self, name, content):
        self.name = name
        self.content = content

    def __iter__(self):
        for o in self.content:
            yield o

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type:
            logging.error("Aborted %s", self,
                          exc_info=(exc_type, exc_value, traceback))

with MyGenIt("foo", range(10)) as gen:
    for x in gen:
        if x == 5:
            raise ValueError("got 5")
A context manager that could also act as an iterator -- and would catch caller code exceptions like your ValueError.
The basic problem is that you are trying to use a with statement inside the generator to catch an exception that is raised outside the generator. You cannot get __iter__ to see the ValueError, because __iter__ is not executing at the time the ValueError is raised.
The GeneratorExit exception is raised when the generator itself is deleted, which happens when it is garbage collected. As soon as the exception occurs, the for loop terminates; since the only reference to the generator (the object obtained by calling __iter__) is in the loop expression, terminating the loop removes the only reference to the iterator and makes it available for garbage collection. It appears that here it is being garbage collected immediately, meaning that the GeneratorExit exception happens between the raising of the ValueError and the propagation of that ValueError to the enclosing code. The GeneratorExit is normally handled totally internally; you are only seeing it because your with statement is inside the generator itself.
In other words, the flow goes something like this:
1. Exception is raised outside the generator
2. for loop exits
3. Generator is now available for garbage collection
4. Generator is garbage collected
5. Generator's .close() is called
6. GeneratorExit is raised inside the generator
7. ValueError propagates to calling code
The last step does not occur until after your context manager has seen the GeneratorExit. When I run your code, I see the ValueError raised after the log message is printed.
You can see that the garbage collection is at work, because if you create another reference to the iterator itself, it will keep the iterator alive, so it won't be garbage collected, and so the GeneratorExit won't occur. That is, this "works":
it = iter(MyGenIt("foo", range(10)))
for x in it:
    if x == 5:
        raise ValueError("got 5")
The result is that the ValueError propagates and is visible; no GeneratorExit occurs and nothing is logged. You seem to think that the GeneratorExit is somehow "masking" your ValueError, but it isn't really; it's just an artifact introduced by not keeping any other references to the iterator. The fact that GeneratorExit occurs immediately in your example isn't even guaranteed behavior; it's possible that the iterator might not be garbage-collected until some unknown time in the future, and the GeneratorExit would then be logged at that time.
Turning to your larger question of "why do I see GeneratorExit", the answer is that that is the only exception that actually occurs within the generator function. The ValueError occurs outside the generator, so the generator can't catch it. This means your code can't really work in the way you seem to intend it to. Your with statement is inside the generator function, so it can only catch exceptions that happen in the process of yielding items from the generator; the generator has no knowledge of what happens between the times when it advances. But your ValueError is raised in the body of the loop over the generator contents. The generator is not executing at this time; it's just sitting there suspended.
You can't use a with statement in a generator to magically trap exceptions that occur in the code that iterates over the generator. The generator does not "know" about the code that iterates over it and can't handle exceptions that occur there. If you want to catch exceptions within the loop body, you need a separate with statement enclosing the loop itself.
The GeneratorExit is raised whenever a generator or coroutine is closed. Even without the context manager, we can replicate the exact condition with a simple generator function that prints the exception information when an exception is raised inside it (further reducing the provided code to show exactly how and where that exception is generated).
import sys

def dummy_gen():
    for idx in range(5):
        try:
            yield idx
        except:
            print(sys.exc_info())
            raise

for i in dummy_gen():
    raise ValueError('foo')
Output:
(<class 'GeneratorExit'>, GeneratorExit(), <traceback object at 0x7f96b26b4cc8>)
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
ValueError: foo
Note that an exception was also raised inside the generator itself, as shown by the except block being executed. The exception was then re-raised after the print call, but it doesn't show up anywhere in the output because it is handled internally.
We can also abuse this fact to see if we can manipulate the flow by swallowing the GeneratorExit exception and see what happens. This can be done by removing the raise statement inside the dummy_gen function to get the following output:
(<class 'GeneratorExit'>, GeneratorExit(), <traceback object at 0x7fd1f0438dc8>)
Exception ignored in: <generator object dummy_gen at 0x7fd1f0436518>
RuntimeError: generator ignored GeneratorExit
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
ValueError: foo
Note how there is an internal RuntimeError that was raised, complaining that the generator ignored the GeneratorExit exception. From this we can clearly see that this exception is produced by the generator itself inside the generator function, and that the ValueError raised outside that scope is never present inside the generator function.
Since a context manager will trap all exceptions as is, and the context manager is inside the generator function, whatever exception is raised inside it will simply be passed to __exit__ as is. Consider the following:
import logging

class Context(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type:
            logging.error("Aborted %s", self,
                          exc_info=(exc_type, exc_value, traceback))
Modify the dummy_gen to the following:
def dummy_gen():
    with Context():
        for idx in range(5):
            try:
                yield idx
            except:
                print(sys.exc_info())
                raise
Running the resulting code:
(<class 'GeneratorExit'>, GeneratorExit(), <traceback object at 0x7f44b8fb8908>)
ERROR:root:Aborted <__main__.Context object at 0x7f44b9032d30>
Traceback (most recent call last):
File "foo.py", line 26, in dummy_gen
yield idx
GeneratorExit
Traceback (most recent call last):
File "foo.py", line 41, in <module>
raise ValueError('foo')
ValueError: foo
The same GeneratorExit that is raised is now presented to the context manager, because this is the behavior that was defined.
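As a side note, the same thing can be reproduced deterministically, without relying on garbage collection, by closing the generator yourself. A minimal sketch using the Context-wrapped dummy_gen from above:
g = dummy_gen()   # the version wrapped in `with Context():`
next(g)           # advance to the first yield
g.close()         # throws GeneratorExit at the yield; Context.__exit__ logs it,
                  # and close() then swallows the re-raised GeneratorExit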
Related
Look at this:
RuntimeError: No active exception to reraise
I use raise without an exception, like this:
class example:
    def __getattribute__(self, attr_name):
        raise  # I mean: AttributeError: '...' object has no attribute '...'
This is the raise statement:
raise_stmt ::= "raise" [expression ["from" expression]]
expression is OPTIONAL.
I checked this, but it isn't the answer to my question. Since the error says "No active exception to reraise", apparently I can make an exception active somehow, but I do not know what this error means. My questions are: what is meant by "active exception" and where is it used? Does it help make the code shorter and more optimized? Is it possible to use it for the task I showed a little higher in the code?
When you use the raise keyword bare (with no arguments), Python tries to re-raise the exception that is currently being handled in the current scope. If there is no such exception, you get RuntimeError: No active exception to reraise.
To see which exception is active (being handled), you can use sys.exc_info():
import sys

try:
    raise ZeroDivisionError()
except ZeroDivisionError:
    type_, value, tb = sys.exc_info()
    print(type_)  # <class 'ZeroDivisionError'>
In the above except block you can use the bare raise keyword, which re-raises the ZeroDivisionError exception for you.
If there is no active exception, the return value of sys.exc_info() is (None, None, None). In that case you have to use the raise keyword followed by a subclass or an instance of BaseException. This is the situation in your question inside the __getattribute__ method, since there is no active exception there.
class Example:
    def __getattribute__(self, attr_name):
        raise AttributeError(f'Error for "{attr_name}".')

obj = Example()
obj.foo  # AttributeError: Error for "foo".
From comments:
An active exception is the exception that has currently been triggered and is in flight. If you don't catch it and let it bubble up, it will terminate the process.
Here is what I found for my questions:
what is meant by "active exception" and where it is used?
When using try-except and the code enters the except block, the exception becomes active. As for usage, we can inspect the error and, if it should not be handled there, re-raise it with a bare raise (even inside functions called from the except block). The error is then re-raised without needing an explicit reference to it. I do not know if there is another way to do this or not, but I was convinced by this method.
Example:
>>> try:
...     10 / 0
... except ZeroDivisionError as err:
...     raise
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ZeroDivisionError: division by zero
>>> def test():
...     raise
...
>>> try:
...     10 / 0
... except ZeroDivisionError as err:
...     test()
...
Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
  File "<stdin>", line 2, in <module>
ZeroDivisionError: division by zero
>>>
Does it help make the code shorter and more optimized?
Yes. Using this method, both the code and the error output are shorter. However, it gives less control over how the exception is handled, which can sometimes be problematic.
Is it possible to use it for the task I showed a little higher in the code?
No. For the code shown above to take effect, NotImplemented must be returned instead. I am fairly confident of this, though it may not be enough on its own.
I'd like to use a context manager within a coroutine. This coroutine should handle an unknown number of steps. However, because the number of steps is unknown, it's unclear when the context manager should exit. I'd like it to exit when the coroutine goes out of scope / is garbage collected; however, this does not seem to happen in the example below:
import contextlib

@contextlib.contextmanager
def cm():
    print("STARTED")
    yield
    print("ENDED")

def coro(a: str):
    with cm():
        print(a)
        while True:
            val1, val2 = yield
            print(val1, val2)

c = coro("HI")
c.send(None)
print("---")
c.send((1, 2))
print("---!")
Output of this program:
STARTED
HI
---
1 2
---!
The context manager never printed "ENDED".
How can I make a coroutine that will support any number of steps, and be guaranteed to exit gracefully? I don't want to make this a responsibility of the caller.
TLDR: The issue is that when an exception is raised (and not handled) inside a with block, the __exit__ method of the context manager is called with that exception. For contextmanager-decorated generators, this causes the exception to be thrown into the generator. cm does not handle this exception, and thus the cleanup code is not run. When coro is garbage collected, its close method is called, which throws a GeneratorExit into coro (which then gets thrown into cm). What follows is a detailed description of the above steps.
The close method throws a GeneratorExit into coro, which means a GeneratorExit is raised at the point of the yield. coro doesn't handle the GeneratorExit, so it exits the context via an error. This causes the __exit__ method of the context to be called with the error and error information. What does the __exit__ method from a contextmanager-decorated generator do? If it is called with an exception, it throws that exception into the underlying generator.
At this point a GeneratorExit is raised from the yield statement in the body of our context manager. That unhandled exception causes the cleanup code not to be run. It is then raised by the context manager and passed back to the __exit__ of the contextmanager decorator. Being the same error that was thrown, __exit__ returns False to indicate that the original error sent to __exit__ was not handled.
Finally, this continues the GeneratorExit's propagation outside of the with block inside coro, where it continues to be unhandled. However, not handling a GeneratorExit is normal for generators, so the original close method suppresses the GeneratorExit.
See this part of the yield documentation:
If the generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), the generator-iterator’s close() method will be called, allowing any pending finally clauses to execute.
Looking at the close documentation we see:
Raises a GeneratorExit at the point where the generator function was paused. If the generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), close returns to its caller.
This part of the with statement documentation:
The suite is executed.
The context manager's __exit__() method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__(). Otherwise, three None arguments are supplied.
And the code of the __exit__ method for the contextmanager decorator.
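For reference, the part of that __exit__ that matters here behaves roughly like the following simplified sketch (not the exact stdlib source; error handling is trimmed): the exception that broke the with block is thrown into the wrapped generator.
# Simplified sketch of the __exit__ method on contextlib's internal helper class
# (not the exact stdlib source).
def __exit__(self, typ, value, traceback):
    if typ is None:
        # No exception: just run the generator to completion.
        try:
            next(self.gen)
        except StopIteration:
            return False
        raise RuntimeError("generator didn't stop")
    # An exception occurred in the with block: throw it into the generator.
    if value is None:
        value = typ()
    try:
        self.gen.throw(typ, value, traceback)
    except StopIteration as exc:
        # The generator handled it and finished: suppress the original exception.
        return exc is not value
    except BaseException as exc:
        # The generator did not handle it (or re-raised it): do not suppress.
        if exc is value:
            return False
        raise
    raise RuntimeError("generator didn't stop after throw()")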
So with all this context (rim-shot), the easiest way we can get the desired behavior is with a try-except-finally in the definition of our context manager. This is the suggested method from the contextlib docs. And all their examples follow this form.
Thus, you can use a try…except…finally statement to trap the error (if any), or ensure that some cleanup takes place.
import contextlib

@contextlib.contextmanager
def cm():
    try:
        print("STARTED")
        yield
    except Exception:
        raise
    finally:
        print("ENDED")

def coro(a: str):
    with cm():
        print(a)
        while True:
            val1, val2 = yield
            print(val1, val2)

c = coro("HI")
c.send(None)
print("---")
c.send((1, 2))
print("---!")
The output is now:
STARTED
HI
---
1 2
---!
ENDED
as desired.
We could also define our context manager in the traditional manner, as a class with an __enter__ and __exit__ method, and still get the correct behavior:
class CM:
    def __enter__(self):
        print('STARTED')

    def __exit__(self, exc_type, exc_value, traceback):
        print('ENDED')
        return False
The situation is somewhat simpler, because we can see exactly what the __exit__ method is without having to go to the source code. The GeneratorExit gets sent (as a parameter) to __exit__ where __exit__ happily runs its cleanup code and then returns False. This is not strictly necessary as otherwise None (another Falsey value) would have been returned, but it indicates that any exception that was sent to __exit__ was not handled. (The return value of __exit__ doesn't matter if there was no exception).
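For example, a small usage sketch (closing the coroutine explicitly here just makes the cleanup deterministic instead of waiting for garbage collection):
def coro(a: str):
    with CM():
        print(a)
        while True:
            val1, val2 = yield
            print(val1, val2)

c = coro("HI")
c.send(None)    # prints STARTED and HI
c.send((1, 2))  # prints 1 2
c.close()       # GeneratorExit reaches CM.__exit__, which prints ENDED and
                # returns False; close() then suppresses the GeneratorExit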
You can do it by telling the coroutine to shut down by sending it something that will cause it to break out of the loop and return, as illustrated below. Doing so will cause a StopIteration exception to be raised where this is done, so I added another context manager to allow it to be suppressed. Note I have also added a coroutine decorator to make coroutines start up automatically when first called, but that part is strictly optional.
import contextlib
from typing import Callable

QUIT = 'quit'

def coroutine(func: Callable):
    """ Decorator to make coroutines automatically start when called. """
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)
        return cr
    return start

@contextlib.contextmanager
def ignored(*exceptions):
    try:
        yield
    except exceptions:
        pass

@contextlib.contextmanager
def cm():
    print("STARTED")
    yield
    print("ENDED")

@coroutine
def coro(a: str):
    with cm():
        print(a)
        while True:
            value = (yield)
            if value == QUIT:
                break
            val1, val2 = value
            print(val1, val2)

with ignored(StopIteration):
    c = coro("HI")
    #c.send(None)  # No longer needed.
    print("---")
    c.send((1, 2))
    c.send((3, 5))
    c.send(QUIT)  # Tell coroutine to clean itself up and exit.
print("---!")
Output:
STARTED
HI
---
1 2
3 5
ENDED
---!
I would like to use raise without printing the traceback on the screen. I know how to do that using try/except, but I can't find a way to do it with raise.
Here is an example:
def my_function(self):
    resp = self.resp
    if resp.status_code == 404:
        raise NoSuchElementError('GET' + self.url + '{}'.format(resp.status_code))
    elif resp.status_code == 500:
        raise ServerErrorError('GET' + self.url + '{}'.format(resp.status_code))
When executing this, if I have a 404, the traceback will print on the screen.
Traceback (most recent call last):
File "test.py", line 32, in <module>
print ins.my_function()
File "api.py", line 820, in my_function
raise NoSuchElementError('GET ' + self.url + ' {} '.format(resp.status_code))
This is an API wrapper and I don't want users to see the traceback but to see the API response codes and error messages instead.
Is there a way to do it ?
I ran into a similar problem where a parent class was using the exception value on raise to pass messages through, but where I didn't want to dump the traceback. @lejlot gives a great solution using sys.excepthook but I needed to apply it with a more limited scope. Here's the modification:
import sys
from contextlib import contextmanager

@contextmanager
def except_handler(exc_handler):
    "Sets a custom exception handler for the scope of a 'with' block."
    sys.excepthook = exc_handler
    yield
    sys.excepthook = sys.__excepthook__
Then, to use it:
def my_exchandler(type, value, traceback):
    print(': '.join([str(type.__name__), str(value)]))

with except_handler(my_exchandler):
    raise Exception('Exceptional!')
# -> Exception: Exceptional!
That way, if an exception isn't raised in the block, default exception handling will resume for any subsequent exceptions:
with except_handler(my_exchandler):
    pass

raise Exception('Ordinary...')
# -> Traceback (most recent call last):
# ->   File "raise_and_suppress_traceback.py", line 22, in <module>
# ->     raise Exception('Ordinary...')
# -> Exception: Ordinary...
The problem is not with raising anything, but with what the Python interpreter does when your program terminates with an unhandled exception (it simply prints the stack trace). If you want to avoid that, put a try/except block around everything whose stack trace you want to "hide", like:
def main():
    try:
        actual_code()
    except Exception as e:
        print(e)
The other way around is to modify the exception handler, sys.excepthook(type, value, traceback), to run your own logic, like
import sys

def my_exchandler(type, value, traceback):
    print(value)

sys.excepthook = my_exchandler
You can even condition on the exception type and apply your particular logic only if it is your type of exception, and otherwise fall back to the original handler, for example:
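A minimal sketch of that idea, assuming a custom exception class named MyAPIError (a hypothetical name used only for illustration):
import sys

class MyAPIError(Exception):
    """Hypothetical custom exception used by the API wrapper."""

def my_exchandler(exc_type, value, traceback):
    if issubclass(exc_type, MyAPIError):
        # Our own errors: print only the message, no traceback.
        print(value)
    else:
        # Anything else: fall back to the default behaviour.
        sys.__excepthook__(exc_type, value, traceback)

sys.excepthook = my_exchandler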
Modified @Alec's answer:
import sys
from contextlib import contextmanager

@contextmanager
def disable_exception_traceback():
    """
    All traceback information is suppressed and only the exception type and value are printed.
    """
    default_value = getattr(sys, "tracebacklimit", 1000)  # `1000` is Python's default value
    sys.tracebacklimit = 0
    yield
    sys.tracebacklimit = default_value  # revert changes
Usage:
with disable_exception_traceback():
    raise AnyYourCustomException()
Use this if you only need to hide a traceback without modifying an exception message. Tested on Python 3.8
UPD: code improved per @DrJohnAStevenson's comment
Catch the exception, log it and return something that indicates something went wrong to the consumer (sending a 200 back when a query failed will likely cause problems for your client).
try:
    return do_something()
except NoSuchElementError as e:
    logger.error(e)
    return error_response()
The fake error_response() function could do anything, from returning an empty response to returning an error message. You should still make use of proper HTTP status codes. It sounds like you should be returning a 404 in this instance.
You should handle exceptions gracefully but you shouldn't hide errors from clients completely. In the case of your NoSuchElementError exception it sounds like the client should be informed (the error might be on their end).
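For illustration only, error_response() might look something like the sketch below (a hypothetical helper; adapt it to whatever framework you are actually using):
def error_response(message="Resource not found", status_code=404):
    # Hypothetical helper: return an error body plus a proper HTTP status code
    # instead of letting a traceback leak to the client.
    return {"error": message}, status_code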
You can create a class that takes two values, a type and a code, to build a custom exception message. Afterwards, you can just raise the class inside a try/except statement.
class ExceptionHandler(Exception):
    def __init__(self, exceptionType, code):
        self.exceptionType = exceptionType
        self.code = code
        print(f"Error logged: {self.exceptionType}, Code: {self.code}")

try:
    raise ExceptionHandler(exceptionType=KeyboardInterrupt, code=101)
except Exception:
    pass
I think I've read that exceptions inside a with block do not allow __exit__ to be called correctly. If I am wrong on this note, pardon my ignorance.
So I have some pseudo code here, my goal is to use a lock context that upon __enter__ logs a start datetime and returns a lock id, and upon __exit__ records an end datetime and releases the lock:
def main():
    raise Exception

with cron.lock() as lockid:
    print('Got lock: %i' % lockid)
    main()
How can I still raise errors in addition to exiting the context safely?
Note: I intentionally raise the base exception in this pseudo-code as I want to exit safely upon any exception, not just expected exceptions.
Note: Alternative/standard concurrency prevention methods are irrelevant, I want to apply this knowledge to any general context management. I do not know if different contexts have different quirks.
PS. Is the finally block relevant?
The __exit__ method is called as normal if the context manager is broken by an exception. In fact, the parameters passed to __exit__ all have to do with handling this case! From the docs:
object.__exit__(self, exc_type, exc_value, traceback)
Exit the runtime context related to this object. The parameters describe the exception that caused the context to be exited. If the context was exited without an exception, all three arguments will be None.
If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent it from being propagated), it should return a true value. Otherwise, the exception will be processed normally upon exit from this method.
Note that __exit__() methods should not reraise the passed-in exception; this is the caller’s responsibility.
So you can see that the __exit__ method will be executed and then, by default, any exception will be re-raised after exiting the context manager. You can test this yourself by creating a simple context manager and breaking it with an exception:
class DummyContextManager(object):
    def __enter__(self):
        print('Entering...')

    def __exit__(self, exc_type, exc_value, traceback):
        print('Exiting...')
        # If we returned True here, any exception would be suppressed!

with DummyContextManager() as foo:
    raise Exception()
When you run this code, you should see everything you want (might be out of order since print tends to end up in the middle of tracebacks):
Entering...
Exiting...
Traceback (most recent call last):
File "C:\foo.py", line 8, in <module>
raise Exception()
Exception
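If you did return True from __exit__, the exception would be swallowed instead; a quick sketch of that variant:
class SuppressingContextManager(object):
    def __enter__(self):
        print('Entering...')

    def __exit__(self, exc_type, exc_value, traceback):
        print('Exiting...')
        return True  # suppress whatever exception broke the with block

with SuppressingContextManager() as foo:
    raise Exception()
print('Still running, no traceback')  # the Exception never propagates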
The best practice when using @contextlib.contextmanager was not quite clear to me from the above answer. I followed the link in the comment from @BenUsman.
If you are writing a context manager, you must wrap the yield in a try/finally block:
from contextlib import contextmanager

@contextmanager
def managed_resource(*args, **kwds):
    # Code to acquire resource, e.g.:
    resource = acquire_resource(*args, **kwds)
    try:
        yield resource
    finally:
        # Code to release resource, e.g.:
        release_resource(resource)

>>> with managed_resource(timeout=3600) as resource:
...     # Resource is released at the end of this block,
...     # even if code in the block raises an exception
I encountered a strange behaviour with Python's with statement recently. I have code which uses context managers to roll back configuration changes in the __exit__ method. The manager had a return False in a finally block in __exit__. I've isolated the case in the following code; the only difference between the two classes is the indentation of the return statement:
class Manager1(object):
    def release(self):
        pass  # Implementation not important

    def rollback(self):
        # Rollback fails throwing an exception:
        raise Exception("A failure")

    def __enter__(self):
        print "ENTER1"

    def __exit__(self, exc_type, exc_val, exc_tb):
        print "EXIT1"
        try:
            self.rollback()
        finally:
            self.release()
            return False  # The only difference here!


class Manager2(object):
    def release(self):
        pass  # Implementation not important

    def rollback(self):
        # Rollback fails throwing an exception:
        raise Exception("A failure")

    def __enter__(self):
        print "ENTER2"

    def __exit__(self, exc_type, exc_val, exc_tb):
        print "EXIT2"
        try:
            self.rollback()
        finally:
            self.release()
        return False  # The only difference here!
In the code above the rollback fails with an Exception. My question is: why does Manager1 behave differently from Manager2? The exception is not thrown outside the with statement for Manager1, but it IS thrown on exit for Manager2.
with Manager1() as m:
    pass  # The Exception is NOT thrown on exit here

with Manager2() as m:
    pass  # The Exception IS thrown on exit here
According to documentation of __exit__:
If an exception is supplied, and the method wishes to suppress the
exception (i.e., prevent it from being propagated), it should return a
true value. Otherwise, the exception will be processed normally upon
exit from this method.
In my opinion, in both cases __exit__ does not return True, so the exception should not be suppressed in either case. However, in Manager1 it is. Can anyone explain that?
I use Python 2.7.6.
If the finally clause is reached, that means either the try block has completed successfully, or it raised an error that is being processed, or it executed a return.
In Manager1 the execution of the return statement as part of the finally clause makes __exit__ terminate normally, returning False and discarding the exception that was in flight. In your Manager2 class the finally clause still executes, but if it was entered as a result of an exception being raised, it does nothing to stop that exception propagating back up the call chain until it is caught (or until it terminates your program with a traceback).
Manager2.__exit__() will only return False if no exception is raised.
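You can see the same effect without any context manager at all: a return inside finally discards whatever exception is currently in flight. A minimal sketch:
def swallow():
    try:
        raise Exception("A failure")
    finally:
        return False  # the in-flight exception is discarded here

def propagate():
    try:
        raise Exception("A failure")
    finally:
        pass          # finally runs, then the exception keeps propagating
    return False      # only reached if no exception was raised

print(swallow())      # prints: False
propagate()           # raises: Exception: A failure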
I think a good way to understand this is by looking at a separate example that is independent of all the context manager stuff:
>>> def test():
        try:
            print('Before raise')
            raise Exception()
            print('After raise')
        finally:
            print('In finally')
        print('Outside of try/finally')
>>> test()
Before raise
In finally
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
test()
File "<pyshell#6>", line 4, in test
raise Exception()
Exception
So you can see that when an exception is thrown within the try block, any code before the exception is executed and any code inside the finally block is executed. Apart from that, everything else is skipped. That is because the exception that is being thrown ends the function invocation. But because the exception is thrown within a try block, the respective finally block has a final chance to run.
Now, if you comment out the raise line in the function, you will see that all code is executed, since the function does not end prematurely.
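For comparison, with the raise line commented out the function runs to completion, and the output would be roughly:
>>> test()   # with the `raise Exception()` line commented out
Before raise
After raise
In finally
Outside of try/finally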