In Python, should with-statements be used inside a generator? To be clear, I am not asking about using a decorator to create a context manager from a generator function. I am asking whether there is an inherent issue with using a with-statement as a context manager inside a generator, since it will catch StopIteration and GeneratorExit exceptions in at least some cases. Two examples follow.
A good example of the issue is raised by Beazley's example (page 106). I have modified it to use a with-statement so that the files are explicitly closed after the yield in the opener function. I have also added two ways that an exception can be thrown while iterating the results.
import os
import fnmatch

def find_files(topdir, pattern):
    for path, dirname, filelist in os.walk(topdir):
        for name in filelist:
            if fnmatch.fnmatch(name, pattern):
                yield os.path.join(path, name)

def opener(filenames):
    f = None
    for name in filenames:
        print "F before open: '%s'" % f
        #f = open(name,'r')
        with open(name, 'r') as f:
            print "Fname: %s, F#: %d" % (name, f.fileno())
            yield f
        print "F after yield: '%s'" % f

def cat(filelist):
    for i, f in enumerate(filelist):
        if i == 20:
            # Cause an exception
            f.write('foobar')
        for line in f:
            yield line

def grep(pattern, lines):
    for line in lines:
        if pattern in line:
            yield line

pylogs = find_files("/var/log", "*.log*")
files = opener(pylogs)
lines = cat(files)
pylines = grep("python", lines)

i = 0
for line in pylines:
    i += 1
    if i == 10:
        raise RuntimeError("You're hosed!")

print 'Counted %d lines\n' % i
In this example, the context manager successfully closes the files in the opener function. When an exception is raised, I see the traceback from the exception, but the generator stops silently. If the with-statement catches the exception, why doesn't the generator continue?
When I define my own context managers for use inside a generator, I get runtime errors saying that I have ignored a GeneratorExit. For example:
class CManager(object):
    def __enter__(self):
        print " __enter__"
        return self

    def __exit__(self, exctype, value, tb):
        print " __exit__; excptype: '%s'; value: '%s'" % (exctype, value)
        return True

def foo(n):
    for i in xrange(n):
        with CManager() as cman:
            cman.val = i
            yield cman

# Case1
for item in foo(10):
    print 'Pass - val: %d' % item.val

# Case2
for item in foo(10):
    print 'Fail - val: %d' % item.val
    item.not_an_attribute
This little demo works fine in case 1 with no exceptions raised, but fails in case 2, where an AttributeError is raised. Here I see a RuntimeError raised because the with statement has caught and ignored a GeneratorExit exception.
Can someone help clarify the rules for this tricky use case? I suspect it is something I am doing, or not doing, in my __exit__ method. I tried adding code to re-raise GeneratorExit, but that did not help.
From the Data model entry for object.__exit__:
If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent it from being propagated), it should return a true value. Otherwise, the exception will be processed normally upon exit from this method.
In your __exit__ function, you're returning True, which will suppress all exceptions. If you change it to return False, the exceptions will continue to be raised as normal (with the only difference being that you guarantee that your __exit__ function gets called, so you can make sure to clean up after yourself).
For example, changing the code to:
def __exit__(self, exctype, value, tb):
    print " __exit__; excptype: '%s'; value: '%s'" % (exctype, value)
    if exctype is GeneratorExit:
        return False
    return True
allows you to do the right thing and not suppress the GeneratorExit. Now you only see the AttributeError. Maybe the rule of thumb should be the same as with any exception handling: only intercept exceptions you know how to handle. Having an __exit__ return True is on a par with (maybe slightly worse than!) having a bare except:
try:
    something()
except:  # Uh-Oh
    pass
Note that when the AttributeError is raised (and not caught), I believe that causes the reference count on your generator object to drop to 0, which then triggers a GeneratorExit exception within the generator so that it can clean itself up. Using my __exit__, play around with the following two cases and hopefully you'll see what I mean:
try:
    for item in foo(10):
        print 'Fail - val: %d' % item.val
        item.not_an_attribute
except AttributeError:
    pass

print "Here"  # No reference to the generator left.
# Should see __exit__ before "Here"
and
g = foo(10)
try:
    for item in g:
        print 'Fail - val: %d' % item.val
        item.not_an_attribute
except AttributeError:
    pass

print "Here"
b = g  # keep a reference to prevent the reference counter from cleaning this up.
# Now we see __exit__ *after* "Here"
import sys

class CManager(object):
    def __enter__(self):
        print " __enter__"
        return self

    def __exit__(self, exctype, value, tb):
        print " __exit__; excptype: '%s'; value: '%s'" % (exctype, value)
        if exctype is None:
            return
        # only re-raise if it's *not* the exception that was
        # passed to throw(), because __exit__() must not raise
        # an exception unless __exit__() itself failed. But throw()
        # has to raise the exception to signal propagation, so this
        # fixes the impedance mismatch between the throw() protocol
        # and the __exit__() protocol.
        #
        if sys.exc_info()[1] is not (value or exctype()):
            raise
This is the standard language-neutral approach:
import logging

logger = logging.getLogger(__file__)

class SomeClass(object):
    max_retry = 2

    def get(self, key):
        try:
            return self.__get_value(key)
        except Exception:
            logger.error("Exception occurred even after retrying...")
            raise

    def __get_value(self, key, retry_num=0):
        try:
            return self.connection().get(key)
        except Exception:
            logger.error("Exception occurred")
            if retry_num < self.max_retry:
                retry_num += 1
                logger.warning("Retrying!!! Retry count - %s", retry_num)
                return self.__get_value(key, retry_num)
            else:
                raise
Is there any better pythonic way to do this?
A cleaner approach would be not to change the state of the class, since the retry applies only to that one function call (the current implementation wouldn't work as expected when the method is called multiple times). I'd prefer a retry decorator (a loop that breaks once the call succeeds), used as:
...

@retry(n_times=2, catch=Exception)
def get(self, key):
    ...
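A minimal sketch of what such a decorator could look like follows; the name retry and the parameters n_times and catch are just the hypothetical signature used above, not an existing library:
import functools
import logging

logger = logging.getLogger(__name__)

def retry(n_times=2, catch=Exception):
    """Retry the decorated function up to n_times extra attempts when `catch` is raised."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(n_times + 1):
                try:
                    return func(*args, **kwargs)
                except catch:
                    if attempt == n_times:
                        raise  # out of retries: let the last exception propagate
                    logger.warning("Retrying!!! Retry count - %s", attempt + 1)
        return wrapper
    return decorator
Because the retry count lives in the wrapper's local variables, repeated calls to the decorated method do not interfere with each other.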
How can I get the name of an exception that was raised in Python?
e.g.,
try:
    foo = bar
except Exception as exception:
    name_of_exception = ???
    assert name_of_exception == 'NameError'
    print "Failed with exception [%s]" % name_of_exception
For example, I am catching multiple (or all) exceptions, and want to print the name of the exception in an error message.
Here are a few different ways to get the name of the class of the exception:
type(exception).__name__
exception.__class__.__name__
exception.__class__.__qualname__
e.g.,
try:
    foo = bar
except Exception as exception:
    assert type(exception).__name__ == 'NameError'
    assert exception.__class__.__name__ == 'NameError'
    assert exception.__class__.__qualname__ == 'NameError'
If you want the fully qualified class name (e.g. sqlalchemy.exc.IntegrityError instead of just IntegrityError), you can use the function below, which I took from MB's awesome answer to another question (I just renamed some variables to suit my tastes):
def get_full_class_name(obj):
    module = obj.__class__.__module__
    if module is None or module == str.__class__.__module__:
        return obj.__class__.__name__
    return module + '.' + obj.__class__.__name__
Example:
try:
    ...  # <do something with sqlalchemy that angers the database>
except sqlalchemy.exc.SQLAlchemyError as e:
    print(get_full_class_name(e))
    # sqlalchemy.exc.IntegrityError
You can print the exception using a formatted string:
Example:
try:
    ...  # Code to execute
except Exception as err:
    print(f"{type(err).__name__} was raised: {err}")
You can also use sys.exc_info(), which returns a 3-tuple: type, value, traceback. See the documentation: https://docs.python.org/3/library/sys.html#sys.exc_info
import sys

try:
    foo = bar
except Exception:
    exc_type, value, traceback = sys.exc_info()
    assert exc_type.__name__ == 'NameError'
    print "Failed with exception [%s]" % exc_type.__name__
This works, but it seems like there must be an easier, more direct way?
try:
    foo = bar
except Exception as exception:
    assert repr(exception) == '''NameError("name 'bar' is not defined",)'''
    name = repr(exception).split('(')[0]
    assert name == 'NameError'
The other answers here are great for exploration purposes, but if the primary goal is to log the exception (including the name of the exception), perhaps consider using logging.exception instead of print?
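For example, a small sketch: logging.exception logs at ERROR level and automatically appends the current traceback, which includes the exception's name.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

try:
    foo = bar
except Exception:
    # Logs "Failed to compute foo" followed by the traceback,
    # e.g. "NameError: name 'bar' is not defined".
    logger.exception("Failed to compute foo")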
I am defining a context manager class and I would like to be able to skip the block of code without raising an exception if certain conditions are met during instantiation. For example,
class My_Context(object):
    def __init__(self, mode=0):
        """
        if mode = 0, proceed as normal
        if mode = 1, do not execute block
        """
        self.mode = mode

    def __enter__(self):
        if self.mode == 1:
            print 'Exiting...'
            CODE TO EXIT PREMATURELY

    def __exit__(self, type, value, traceback):
        print 'Exiting...'

with My_Context(mode=1):
    print 'Executing block of codes...'
According to PEP-343, a with statement translates from:
with EXPR as VAR:
    BLOCK
to:
mgr = (EXPR)
exit = type(mgr).__exit__  # Not calling it yet
value = type(mgr).__enter__(mgr)
exc = True
try:
    try:
        VAR = value  # Only if "as VAR" is present
        BLOCK
    except:
        # The exceptional case is handled here
        exc = False
        if not exit(mgr, *sys.exc_info()):
            raise
        # The exception is swallowed if exit() returns true
finally:
    # The normal and non-local-goto cases are handled here
    if exc:
        exit(mgr, None, None, None)
As you can see, there is nothing obvious you can do from the call to the __enter__() method of the context manager that can skip the body ("BLOCK") of the with statement.
People have done Python-implementation-specific things, such as manipulating the call stack inside of the __enter__(), in projects such as withhacks. I recall Alex Martelli posting a very interesting with-hack on stackoverflow a year or two back (don't recall enough of the post off-hand to search and find it).
But the simple answer to your question/problem is that you cannot do what you're asking (skipping the body of the with statement) without resorting to so-called "deep magic", which is not necessarily portable between Python implementations. With deep magic you might be able to do it, but I recommend only doing such things as an exercise in seeing how it might be done, never in "production code".
If you want an ad-hoc solution that uses the ideas from withhacks (specifically from AnonymousBlocksInPython), this will work:
import sys
import inspect

class My_Context(object):
    def __init__(self, mode=0):
        """
        if mode = 0, proceed as normal
        if mode = 1, do not execute block
        """
        self.mode = mode

    def __enter__(self):
        if self.mode == 1:
            print 'Met block-skipping criterion ...'
            # Do some magic
            sys.settrace(lambda *args, **keys: None)
            frame = inspect.currentframe(1)
            frame.f_trace = self.trace

    def trace(self, frame, event, arg):
        raise

    def __exit__(self, type, value, traceback):
        print 'Exiting context ...'
        return True
Compare the following:
with My_Context(mode=1):
    print 'Executing block of code ...'
with
with My_Context(mode=0):
    print 'Executing block of code ... '
A Python 3 update to the hack mentioned in other answers, from withhacks (specifically from AnonymousBlocksInPython):
import sys

class SkipWithBlock(Exception):
    pass

class SkipContextManager:
    def __init__(self, skip):
        self.skip = skip

    def __enter__(self):
        if self.skip:
            sys.settrace(lambda *args, **keys: None)
            frame = sys._getframe(1)
            frame.f_trace = self.trace

    def trace(self, frame, event, arg):
        raise SkipWithBlock()

    def __exit__(self, type, value, traceback):
        if type is None:
            return  # No exception
        if issubclass(type, SkipWithBlock):
            return True  # Suppress special SkipWithBlock exception

with SkipContextManager(skip=True):
    print('In the with block')  # Won't be called

print('Out of the with block')
As mentioned before by joe, this is a hack that should be avoided:
The method trace() is called when a new local scope is entered, i.e. right when the code in your with block begins. When an exception is raised here, it gets caught by __exit__(). That's how this hack works. I should add that this is very much a hack and should not be relied upon. The magical sys.settrace() is not actually part of the language definition; it just happens to be in CPython. Also, debuggers rely on sys.settrace() to do their job, so using it yourself interferes with that. There are many reasons why you shouldn't use this code. Just FYI.
Based on @Peter's answer, here's a version that uses no string manipulation but should otherwise work the same way:
from contextlib import contextmanager

@contextmanager
def skippable_context(skip):
    skip_error = ValueError("Skipping Context Exception")
    prev_entered = getattr(skippable_context, "entered", False)
    skippable_context.entered = False

    def command():
        skippable_context.entered = True
        if skip:
            raise skip_error

    try:
        yield command
    except ValueError as err:
        if err != skip_error:
            raise
    finally:
        assert skippable_context.entered, "Need to call returned command at least once."
        skippable_context.entered = prev_entered

print("=== Running with skip disabled ===")
with skippable_context(skip=False) as command:
    command()
    print("Entering this block")
print("... Done")

print("=== Running with skip enabled ===")
with skippable_context(skip=True) as command:
    command()
    raise NotImplementedError("... But this will never be printed")
print("... Done")
What you're trying to do isn't possible, unfortunately. If __enter__ raises an exception, that exception is raised at the with statement (__exit__ isn't called). If it doesn't raise an exception, then the return value is fed to the block and the block executes.
Closest thing I could think of is a flag checked explicitly by the block:
class Break(Exception):
    pass

class MyContext(object):
    def __init__(self, mode=0):
        """
        if mode = 0, proceed as normal
        if mode = 1, do not execute block
        """
        self.mode = mode

    def __enter__(self):
        if self.mode == 1:
            print 'Exiting...'
        return self.mode

    def __exit__(self, type, value, traceback):
        if type is None:
            print 'Normal exit...'
            return  # no exception
        if issubclass(type, Break):
            return True  # suppress exception
        print 'Exception exit...'

with MyContext(mode=1) as skip:
    if skip: raise Break()
    print 'Executing block of codes...'
This also lets you raise Break() in the middle of a with block to simulate a normal break statement.
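For instance, a small sketch reusing the MyContext and Break classes above:
with MyContext(mode=0) as skip:
    if skip: raise Break()
    print('First half of the block runs...')
    raise Break()  # acts like a normal "break": __exit__ suppresses it
    print('This line is never reached')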
Context managers are not the right construct for this. You're asking for the body to be executed n times, in this case zero or one. If you look at the general case, n where n >= 0, you end up with a for loop:
def do_squares(n):
    for i in range(n):
        yield i ** 2

for x in do_squares(3):
    print('square: ', x)

for x in do_squares(0):
    print('this does not print')
In your case, which is more special purpose, and doesn't require binding to the loop variable:
def should_execute(mode=0):
    if mode == 0:
        yield

for _ in should_execute(0):
    print('this prints')

for _ in should_execute(1):
    print('this does not')
Another slightly hacky option makes use of exec. This is handy because it can be modified to do arbitrary things (e.g. memoization of context-blocks):
from contextlib import contextmanager

@contextmanager
def skippable_context_exec(skip):
    SKIP_STRING = 'Skipping Context Exception'
    old_value = skippable_context_exec.is_execed if hasattr(skippable_context_exec, 'is_execed') else False
    skippable_context_exec.is_execed = False
    command = "skippable_context_exec.is_execed=True; " + ("raise ValueError('{}')".format(SKIP_STRING) if skip else '')
    try:
        yield command
    except ValueError as err:
        if SKIP_STRING not in str(err):
            raise
    finally:
        assert skippable_context_exec.is_execed, "You never called exec in your context block."
        skippable_context_exec.is_execed = old_value

print('=== Running with skip disabled ===')
with skippable_context_exec(skip=False) as command:
    exec(command)
    print('Entering this block')
print('... Done')

print('=== Running with skip enabled ===')
with skippable_context_exec(skip=True) as command:
    exec(command)
    print('... But this will never be printed')
print('... Done')
Would be nice to have something that gets rid of the exec without weird side effects, so if you can think of a way I'm all ears. The current lead answer to this question appears to do that but has some issues.
Is it reasonable in Python to catch a generic exception, then use isinstance() to detect the specific type of exception in order to handle it appropriately?
I'm playing around with the dnspython toolkit at the moment, which has a range of exceptions for things like a timeout, an NXDOMAIN response, etc. These exceptions are subclasses of dns.exception.DNSException, so I am wondering if it's reasonable, or pythonic, to catch DNSException then check for a specific exception with isinstance().
e.g.
try:
    answers = dns.resolver.query(args.host)
except dns.exception.DNSException as e:
    if isinstance(e, dns.resolver.NXDOMAIN):
        print "No such domain %s" % args.host
    elif isinstance(e, dns.resolver.Timeout):
        print "Timed out while resolving %s" % args.host
    else:
        print "Unhandled exception"
I'm new to Python so be gentle!
That's what multiple except clauses are for:
try:
    answers = dns.resolver.query(args.host)
except dns.resolver.NXDOMAIN:
    print "No such domain %s" % args.host
except dns.resolver.Timeout:
    print "Timed out while resolving %s" % args.host
except dns.exception.DNSException:
    print "Unhandled exception"
Be careful about the order of the clauses: The first matching clause will be taken, so move the check for the superclass to the end.
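For example, if the superclass clause came first, it would match NXDOMAIN and Timeout as well, so the more specific handlers below it would never run (the same code as above, reordered only to illustrate the point):
try:
    answers = dns.resolver.query(args.host)
except dns.exception.DNSException:
    print "Unhandled exception"   # catches NXDOMAIN and Timeout too
except dns.resolver.NXDOMAIN:
    print "No such domain %s" % args.host   # never reached
except dns.resolver.Timeout:
    print "Timed out while resolving %s" % args.host   # never reached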
From dns.resolver you can import some exceptions.
(untested code)
from dns.resolver import Resolver, NXDOMAIN, NoNameservers, Timeout, NoAnswer

try:
    host_record = self.resolver.query(self.host, "A")
    if len(host_record) > 0:
        Mylist['ERROR'] = False
        # Do something
except NXDOMAIN:
    Mylist['ERROR'] = True
    Mylist['ERRORTYPE'] = NXDOMAIN
except NoNameservers:
    Mylist['ERROR'] = True
    Mylist['ERRORTYPE'] = NoNameservers
except Timeout:
    Mylist['ERROR'] = True
    Mylist['ERRORTYPE'] = Timeout
except NameError:
    Mylist['ERROR'] = True
    Mylist['ERRORTYPE'] = NameError
I am aware that I can open multiple files with something like,
with open('a', 'rb') as a, open('b', 'rb') as b:
But I have a situation where I have a list of files to open and am wondering what the preferred method is of doing the same when the number of files is unknown in advance. Something like,
with [ open(f, 'rb') for f in files ] as fs:
(but this fails with an AttributeError since list doesn't implement __exit__)
I don't mind using something like,
try:
    fs = [ open(f, 'rb') for f in files ]
    ....
finally:
    for f in fs:
        f.close()
But I am not sure what will happen if some of the files throw when trying to open them. Will fs be properly defined, with the files that did manage to open, in the finally block?
No, your code wouldn't initialise fs unless all open() calls completed successfully. This should work though:
fs = []
try:
    for f in files:
        fs.append(open(f, 'rb'))
    ....
finally:
    for f in fs:
        f.close()
Note also that f.close() could fail so you may want to catch and ignore (or otherwise handle) any failures there.
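A small sketch of that, assuming files is the list of paths from the question:
fs = []
try:
    for name in files:
        fs.append(open(name, 'rb'))
    # ... work with the open files ...
finally:
    for f in fs:
        try:
            f.close()
        except Exception:
            pass  # or log it; a failed close shouldn't mask the original work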
Sure, why not? Here's a recipe that should do it: create a context manager 'pool' that can enter an arbitrary number of contexts (by calling its enter() method), and they will all be cleaned up at the end of the suite.
class ContextPool(object):
    def __init__(self):
        self._pool = []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        for close in reversed(self._pool):
            close(exc_type, exc_value, exc_tb)

    def enter(self, context):
        close = context.__exit__
        result = context.__enter__()
        self._pool.append(close)
        return result
For example:
>>> class StubContextManager(object):
...     def __init__(self, name):
...         self.__name = name
...     def __repr__(self):
...         return "%s(%r)" % (type(self).__name__, self.__name)
...
...     def __enter__(self):
...         print "called %r.__enter__()" % (self)
...
...     def __exit__(self, *args):
...         print "called %r.__exit__%r" % (self, args)
...
>>> with ContextPool() as pool:
...     pool.enter(StubContextManager("foo"))
...     pool.enter(StubContextManager("bar"))
...     1/0
...
called StubContextManager('foo').__enter__()
called StubContextManager('bar').__enter__()
called StubContextManager('bar').__exit__(<type 'exceptions.ZeroDivisionError'>, ZeroDivisionError('integer division or modulo by zero',), <traceback object at 0x02958648>)
called StubContextManager('foo').__exit__(<type 'exceptions.ZeroDivisionError'>, ZeroDivisionError('integer division or modulo by zero',), <traceback object at 0x02958648>)
Traceback (most recent call last):
  File "<pyshell#67>", line 4, in <module>
    1/0
ZeroDivisionError: integer division or modulo by zero
>>>
Caveats: context managers aren't supposed to raise exceptions in their __exit__() methods, but if they do, this recipe doesn't do the cleanup for all the context managers. Similarly, even if every context manager indicates that an exception should be ignored (by returning True from their exit methods), this will still allow the exception to be raised.
The class ExitStack from the contextlib module provides the functionality you are looking for.
The canonical use-case that is mentioned in the documentation is managing a dynamic number of files.
from contextlib import ExitStack

with ExitStack() as stack:
    files = [stack.enter_context(open(fname)) for fname in filenames]
    # All opened files will automatically be closed at the end of
    # the with statement, even if attempts to open files later
    # in the list raise an exception
Errors can occur when attempting to open a file, when attempting to read from a file, and (very rarely) when attempting to close a file.
So a basic error handling structure might look like:
try:
    stream = open(path)
    try:
        data = stream.read()
    finally:
        stream.close()
except EnvironmentError as exception:
    print 'ERROR:', str(exception)
else:
    print 'SUCCESS'
    # process data
This ensures that close will always be called if the stream variable exists. If stream doesn't exist, then open must have failed, and so there is no file to close (in which case, the except block will be executed immediately).
Do you really need to have the files open in parallel, or can they be processed sequentially? If the latter, then something like the above file-processing code should be put in a function, which is then called for each path in the list.
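A sketch of that sequential approach; the function name process_file and the paths variable are my own illustration, reusing the error-handling structure above:
def process_file(path):
    try:
        stream = open(path)
        try:
            data = stream.read()
        finally:
            stream.close()
    except EnvironmentError as exception:
        print 'ERROR:', str(exception)
    else:
        print 'SUCCESS'
        # process data

for path in paths:
    process_file(path)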
Thanks for all your answers. Taking inspiration from all of you, I have come up with the following. I think (hope) it works as I intended. I wasn't sure whether to post it as an answer or as an addition to the question, but thought an answer was more appropriate, since then, if it fails to do what I asked, it can be commented on appropriately.
It can be used, for example, like this:
with contextlist( [open, f, 'rb'] for f in files ) as fs:
    ....
or like this:
f_lock = threading.Lock()

with contextlist( f_lock, ([open, f, 'rb'] for f in files) ) as (lock, *fs):
    ....
And here it is:
import inspect
import collections.abc  # Sequence lives in collections.abc on Python 3
import traceback

class contextlist:
    def __init__(self, *contexts):
        self._args = []
        for ctx in contexts:
            if inspect.isgenerator(ctx):
                self._args += ctx
            else:
                self._args.append(ctx)

    def __enter__(self):
        if hasattr(self, '_ctx'):
            raise RuntimeError("cannot reenter contextlist")
        s_ctx = self._ctx = []
        try:
            for ctx in self._args:
                if isinstance(ctx, collections.abc.Sequence):
                    ctx = ctx[0](*ctx[1:])
                s_ctx.append(ctx)
                try:
                    ctx.__enter__()
                except Exception:
                    s_ctx.pop()
                    raise
            return s_ctx
        except:
            self.__exit__()
            raise

    def __exit__(self, *exc_info):
        if not hasattr(self, '_ctx'):
            raise RuntimeError("cannot exit from unentered contextlist")
        e = []
        for ctx in reversed(self._ctx):
            try:
                ctx.__exit__()
            except Exception:
                e.append(traceback.format_exc())
        del self._ctx
        if not e == []:
            raise Exception('\n> '*2 + (''.join(e)).replace('\n', '\n> '))