Returning a value when exiting a Python context manager - python

Maybe this is a stupid (and indeed not very practical) question, but I'm asking it because I can't wrap my head around it.
While researching whether a return statement inside a with block would prevent __exit__ from being called (no, it doesn't), I found that it seems common to draw an analogy between __exit__ and the finally clause of a try/finally block (for example here: https://stackoverflow.com/a/9885287/3471881), because:
def test():
    try:
        return True
    finally:
        print("Good bye")
Would execute the same as:
class MyContextManager:
    def __enter__(self):
        return self

    def __exit__(self, *args):
        print('Good bye')


def test():
    with MyContextManager():
        return True
This really helped me understand how context managers work, but after playing around a bit I realised that this analogy won't hold if we return something rather than print.
def test():
    try:
        return True
    finally:
        return False

test()
--> False
While __exit__ seemingly won't return at all:
class MyContextManager:
    def __enter__(self):
        return self

    def __exit__(self, *args):
        return False


def test():
    with MyContextManager():
        return True

test()
--> True
This led me to think that perhaps you can't actually return anything inside __exit__, but you can:
class MyContextManager:
    def __enter__(self):
        return self

    def __exit__(self, *args):
        return self.last_goodbye()

    def last_goodbye(self):
        print('Good bye')


def test():
    with MyContextManager():
        return True

test()
--> Good bye
--> True
Note that it doesn't matter if we don't return anything inside the test() function.
This leads me to my question:
Is it impossible to return a value from inside __exit__ and if so, why?

Yes, it is impossible to alter the return value of the context from inside __exit__.
If the context is exited with a return statement, you cannot alter that return value from your context_manager.__exit__. This is different from a try ... finally ... clause, because the code in finally still belongs to the parent function, while context_manager.__exit__ runs in its own scope.
In fact, __exit__ can return a boolean value (True or False), and Python will understand it: it tells Python whether the exception that exits the context (if any) should be suppressed (not propagated outside the context).
See this example of the meaning of the return value of __exit__:
>>> class MyContextManager:
...     def __init__(self, suppress):
...         self.suppress = suppress
...
...     def __enter__(self):
...         return self
...
...     def __exit__(self, exc_type, exc_obj, exc_tb):
...         return self.suppress
...
>>> with MyContextManager(True):  # suppress exception
...     raise ValueError
...
>>> with MyContextManager(False):  # let exception pass through
...     raise ValueError
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ValueError
>>>
In the above example, both ValueErrors cause control to jump out of the context. In the first block, the __exit__ method of the context manager returns True, so Python suppresses this exception and it's not reflected in the REPL. In the second block, the context manager returns False, so Python lets the outer code handle the exception, which gets printed out by the REPL.
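To tie this back to the original question, here is a minimal sketch (using a hypothetical Suppressor class, not taken from the posts above) showing that even when __exit__ suppresses an exception, its return value never becomes the enclosing function's return value:
class Suppressor:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_obj, exc_tb):
        return True  # tells Python to suppress the exception; it is NOT test()'s return value


def test():
    with Suppressor():
        raise ValueError("boom")
        return True  # never reached
    # Control resumes here after the suppressed exception;
    # the function falls off the end and returns None.


print(test())
--> None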

The workaround is to store the result in an attribute instead of returning it, and access it later. That is, if you intend to use that value for more than a print.
For example, take this simple context manager:
import time

class time_this_scope():
    """Context manager to measure how much time was spent in the target scope."""

    def __init__(self, allow_print=False):
        self.t0 = None
        self.dt = None
        self.allow_print = allow_print

    def __enter__(self):
        self.t0 = time.perf_counter()

    def __exit__(self, type=None, value=None, traceback=None):
        self.dt = time.perf_counter() - self.t0  # Store the desired value.
        if self.allow_print is True:
            print(f"Scope took {self.dt*1000:0.1f} milliseconds.")
It could be used this way:
with time_this_scope(allow_print=True):
    time.sleep(0.100)

>>> Scope took 100 milliseconds.
or like so:
timer = time_this_scope()
with timer:
    time.sleep(0.100)
dt = timer.dt
It could not, however, be used as shown below, since the timer object is no longer accessible once the scope ends. The class would need to be modified by adding return self to __enter__ (a sketch of that change follows the error below). Before the modification, you would get an error:
with time_this_scope() as timer:
    time.sleep(0.100)
dt = timer.dt

>>> AttributeError: 'NoneType' object has no attribute 'dt'
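For reference, a minimal sketch of that modification; the only change is that __enter__ returns self, so the as target is bound to the timer:
import time

class time_this_scope():
    """Same timer as above, but __enter__ returns self so `as timer` works."""

    def __init__(self, allow_print=False):
        self.t0 = None
        self.dt = None
        self.allow_print = allow_print

    def __enter__(self):
        self.t0 = time.perf_counter()
        return self  # the only change: bind the manager to the `as` target

    def __exit__(self, type=None, value=None, traceback=None):
        self.dt = time.perf_counter() - self.t0
        if self.allow_print:
            print(f"Scope took {self.dt*1000:0.1f} milliseconds.")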
Finally, here is a simple use example:
"""Calculate the average time spent sleeping."""
import numpy as np
import time
N = 100
dt_mean = 0
for n in range(N)
timer = time_this_scope()
with timer:
time.sleep(0.001 + np.random.rand()/1000) # 1-2 ms per loop.
dt = timer.dt
dt_mean += dt/N
print(f"Loop {n+1}/{N} took {dt}s.")
print(f"All loops took {dt_mean}s on average.)

Related

Catch any exception to avoid memory leak - Python bad/good practices

I want to make sure a method is called if any exception is raised. In this situation, is it OK (good/bad practice, or could it lead to unexpected consequences) to try/except any Exception? Here's an example of what's on my mind using a decorator:
# implementation
import sys
import traceback


class AmazingClass:
    def __init__(self, arg=None):
        self.__att = arg

    @property
    def att(self):
        return self.__att

    def make_sure_it_quits(method):
        def inner(self, *args, **kwargs):
            try:
                return method(self, *args, **kwargs)
            except Exception as err:
                print(err, "- This was caught because it couldn't be foreseen.")
                traceback.print_exc()
                print("\nQuitting what is supposed to be quit...")
                self.quit()
        return inner

    @make_sure_it_quits
    def this_may_raise_errors(self, arg):
        try:
            self.__att += arg
        except TypeError as err:
            print("This I can handle! Cleaning and exiting...")
            self.quit()
            # sys.exit(1)  # exit, if it's the case

    def quit(self):
        self.__att = None
        print("Everything is very clean now!")
# examples
def no_errors():
    obj = AmazingClass("test")
    obj.this_may_raise_errors("_01")
    print(obj.att)


def with_error_01():
    obj = AmazingClass("test")
    obj.this_may_raise_errors(1)
    print(obj.att)


def with_error_02():
    obj = AmazingClass("test")
    obj.this_may_raise_errors()
    print(obj.att)


# main
if __name__ == '__main__':
    no_errors()
    with_error_01()
    with_error_02()
In this case, with_error_01 represents situations I know in advance can happen, while with_error_02 is an unexpected use of the class.
In both cases, the use of traceback shows what went wrong and where. Also, the method quit must always be called in case of any error.
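Since this page is about context managers, one alternative worth sketching (a hypothetical SafeSession wrapper, not part of the question) is to guarantee the cleanup with __exit__ instead of a decorator:
class SafeSession:
    """Hypothetical wrapper: guarantees obj.quit() runs when the block ends."""

    def __init__(self, obj):
        self.obj = obj

    def __enter__(self):
        return self.obj

    def __exit__(self, exc_type, exc_obj, exc_tb):
        self.obj.quit()   # runs on normal exit and on any exception
        return False      # do not suppress: the caller still sees the error


# usage sketch with the AmazingClass above
with SafeSession(AmazingClass("test")) as obj:
    obj.this_may_raise_errors(1)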

Function prints None instead of printing what I want it to print in decorators in Python

I am studying decorators in Python and trying to use decorators with arguments. I'm having a problem with them: I defined two inner functions in the outer decorator function, and it returns None when I use it as below:
from threading import Thread


def prefix(write: bool = False):
    def thread(func):
        def wrapper(*args, **kwargs):
            t1 = Thread(target=func, args=args, kwargs=kwargs)
            t1.start()
            if write:
                print("write parameter is true.")
        return wrapper
    return thread


@prefix(write=True)
def something(x):
    return x + x


print(something(5))
As you can see, I defined two different functions named prefix and something. If the write parameter is true, it prints the string. But the something function prints "None" instead of printing 5 + 5.
What's wrong?
Well, your wrapper() function doesn't have a return statement, so it will always return None.
Furthermore, how would you expect it to print 5 + 5 (or rather, the result thereof) when that may not have been computed yet, considering you're starting a new thread to do that and never do anything with the return value of func at all?
IOW, if we expand your example a bit:
import time
from threading import Thread


def prefix(write: bool = False):
    def thread(func):  # <- this function replaces `something`
        def wrapper(*args, **kwargs):
            t1 = Thread(target=func, args=args, kwargs=kwargs)
            t1.start()
            if write:
                print("write parameter is true.")
            return "hernekeitto"
        return wrapper
    return thread


@prefix(write=True)
def something(x):
    print("Computing, computing...")
    time.sleep(0.5)
    print("Hmm, hmm, hmm...")
    time.sleep(0.5)
    print("Okay, got it!")
    return x + x


value = something(9)
print("The value is:", value)
This will print out
Computing, computing...
write parameter is true.
The value is: hernekeitto
Hmm, hmm, hmm...
Okay, got it!
As you can see, the thread's first print() happens first, then the write print, then the value print, and then the rest of what happens in the thread. And as you can see, we only know what x + x is after "Okay, got it!", so there's no way you could have returned that out of wrapper() where "hernekeitto" is returned.
See futures (or the equivalent JavaScript concept promises) for a "value that's not yet ready":
import time
from concurrent.futures import Future
from threading import Thread


def in_future(func):
    def wrapper(*args, **kwargs):
        fut = Future()

        def func_wrapper():
            # Wraps the threaded function to resolve the future.
            try:
                fut.set_result(func(*args, **kwargs))
            except Exception as e:
                fut.set_exception(e)

        t1 = Thread(target=func_wrapper)
        t1.start()
        return fut
    return wrapper


@in_future
def something(x):
    print("Computing, computing...")
    time.sleep(0.5)
    print("Hmm, hmm, hmm...")
    time.sleep(0.5)
    print("Okay, got it!")
    return x + x


value_fut = something(9)
print("The value is:", value_fut)
print("Waiting for it to be done...")
print("Here it is!", value_fut.result())
This prints out
Computing, computing...
The value is: <Future at 0x... state=pending>
Waiting for it to be done...
Hmm, hmm, hmm...
Okay, got it!
Here it is! 18
so you can see the future is just a "box" where you'll need to wait for the actual value to be done (or an error to occur getting it).
Normally you'd use futures with the executors in concurrent.futures, but the above is an example of how to do it by hand.
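For comparison, here is a minimal sketch of the same idea using ThreadPoolExecutor from concurrent.futures; submit() hands you the Future directly (the slow function body is just an illustration):
import time
from concurrent.futures import ThreadPoolExecutor


def something(x):
    time.sleep(1)  # stand-in for the slow computation
    return x + x


with ThreadPoolExecutor() as pool:
    value_fut = pool.submit(something, 9)     # returns a Future immediately
    print("The value is:", value_fut)         # still pending at this point
    print("Here it is!", value_fut.result())  # blocks until the result is ready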

Class method wraps a function - Problems with Arguments

In my main, I have a function with an error and a class that tracks errors in a list inside the class itself. In other words, instead of just calling the function, I would like to give this function to a class-method which then "logs" the error in a list and suppresses the error.
Here is my problem:
This function has input arguments. When I hand over my function to the class method, I would like to hand over the inputs, too. What happens is that the function is executed before going to the class method. Therefore, the class method can't suppress the error which happens in the function.
In the code below I set silent=True, so it should not raise an error (because of the try/except clause within the method). Unfortunately, the code raises a TypeError which comes from the function.
Any advice would be much appreciated
PS: I am not looking for a decorator solution :)
Here is the class with the class method which can suppress the error
class ErrorTracker:
    def __init__(self):
        self.list = list()

    def track_func(self, func, silent=False):
        try:
            self.list.append('...in trying')
            print('....trying.....')
            return func
        except Exception as e:
            self.list.append('...in except')
            self.list.append(e)  # important line - here the error gets "logged"
            if not silent:
                raise e
Here is the function with an error
def transformation_with_error(app1, app2):
    # DO STUFF HERE with inputs
    result = str(app1) + str(app2)
    print(result)
    print('TYPE ERROR here')
    raise TypeError
    return result
Here is the main routine:
if __name__ == "__main__":
    error_tracker = ErrorTracker()
    print('-- start transformation')
    error_tracker.track_func(transformation_with_error(app1='AA', app2='BB'), silent=True)
    print('-- end transformation')
    print(error_tracker.list)
If I understand your issue, in your main routine
error_tracker.track_func(transformation_with_error(app1='AA', app2='BB'), silent=True)
calls transformation_with_error before entering error_tracker.track_func. This happens simply because you are indeed calling transformation_with_error there. If you want error_tracker.track_func to call transformation_with_error, you have to pass the latter as an argument, like you would for a callback.
For example:
def test(var1, var2):
    print("{} {}".format(var1, var2))


def callFn(func, *vars):
    func(*vars)


callFn(test, "foo", "bar")
outputs foo bar
Thanks VincentRG, that was it.
Just for the record, below are the changes I made (side note: I added the **kwargs, too, to be able to deal with default values).
Thanks, mate.
class changes
class ErrorTracker:
    def __init__(self):
        self.list = list()

    def track_func(self, func, silent=False, *args, **kwargs):
        try:
            self.list.append('...in trying')
            print('....trying.....')
            return func(*args, **kwargs)
        except Exception as e:
            self.list.append('...in except')
            self.list.append(e)  # important line - here the error gets "logged"
            if not silent:
                raise e
change in call
if __name__ == "__main__":
    error_tracker = ErrorTracker()
    print('-- start transformation')
    error_tracker.track_func(transformation_with_error, silent=True, app1='AA', app2='BB')
    print('-- end transformation')
    print(error_tracker.list)
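One caveat worth noting about the signature above (an observation, not from the original thread): because silent comes before *args, any positional arguments intended for func would be captured by silent first, so silent should only ever be passed by keyword. A keyword-only sketch avoids the ambiguity:
class ErrorTracker:
    def __init__(self):
        self.list = []

    def track_func(self, func, *args, silent=False, **kwargs):
        # silent is keyword-only here, so positional args all go to func
        try:
            self.list.append('...in trying')
            return func(*args, **kwargs)
        except Exception as e:
            self.list.append('...in except')
            self.list.append(e)  # the error gets "logged" here
            if not silent:
                raise


# now positional arguments can be forwarded directly:
# error_tracker.track_func(transformation_with_error, 'AA', 'BB', silent=True)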

Use contextmanager to trap instructions for later execution

I want to achieve a pseudo-db-like transaction using a context manager.
Take for example:
from contextlib import contextmanager


class Transactor:
    def a(): pass

    def b(d, b): pass

    def c(i): pass

    @contextmanager
    def get_session(self):
        txs = []
        yield self  # accumulate method calls
        for tx in txs:
            tx()  # somehow pass the arguments
def main():
    t = Transactor()
    with t.get_session() as session:
        session.a()  # inserts `a` into `txs`
        ... more code ...
        session.c(value)  # inserts `c` and `(value)` into `txs`
        session.b(value1, value2)  # inserts `b` and `(value1, value2)` into `txs`
        ... more code ...

        # non-transactor related code
        f = open('file.txt')  # If this throws an exception,
                              # break out of the context manager,
                              # and discard previous transactor calls.
        ... more code ...
        session.a()  # inserts `a` into `txs`
        session.b(x, y)  # inserts `b` and `(x, y)` into `txs`

    # Now we are outside of the context manager.
    # The following calls should execute immediately
    t.a()
    t.b(x, y)
    t.c(k)
If something goes wrong such as an exception, discard txs (rollback). If it makes it to the end of the context, execute each instruction in order of insertion and pass in the appropriate arguments.
How can I trap the method calls for later execution?
And one extra caveat:
If get_session is not called, I want to execute the instructions immediately.
It's not pretty, but to follow the structure you're looking for you'd have to build a temporary transaction class that holds your function queue and executes it after the context manager exits. You'll need functools.partial, and there are some restrictions:
All the queued up calls must be methods based on your "session" instance. Anything else gets executed right away.
I don't know how you want to handle non-callable session attributes, so for now I assume it'll just retrieve the value.
Having said that, here's my take on it:
from functools import partial


class TempTrans:
    # pass in the object instance to mimic
    def __init__(self, obj):
        self._queue = []

        # iterate through the attributes and methods within the object and its class
        for attr, val in type(obj).__dict__.items() ^ obj.__dict__.items():
            if not attr.startswith('_'):
                if callable(val):
                    setattr(self, attr, partial(self._add, getattr(obj, attr)))
                else:
                    # placeholder to handle non-callable attributes
                    setattr(self, attr, val)

    # function to add to queue
    def _add(self, func, *args, **kwargs):
        self._queue.append(partial(func, *args, **kwargs))

    # function to execute the queue
    def _execute(self):
        _remove = []

        # iterate through the queue to call the functions.
        # I suggest catching errors here in case one of your functions falls through
        for func in self._queue:
            try:
                func()
                _remove.append(func)
            except Exception as e:
                print('some error occurred')
                break

        # remove the functions that were successfully run
        for func in _remove:
            self._queue.remove(func)
Now onto the context manager (it will be outside your class, you can place it in as a class method if you wish):
from contextlib import contextmanager


@contextmanager
def temp_session(obj):
    t = TempTrans(obj)
    try:
        yield t
        t._execute()
        print('Transactions successfully ran')
    except:
        print('Encountered errors, queue was not executed')
    finally:
        print(t._queue)  # debug to see what's left of the queue
Usage:
f = Foo()

with temp_session(f) as session:
    session.a('hello')
    session.b(1, 2, 3)

# a hello
# b 1 2 3
# Transactions successfully ran
# []

with temp_session(f) as session:
    session.a('hello')
    session.b(1, 2, 3)
    session.attrdoesnotexist  # expect an error

# Encountered errors, queue was not executed
# [
#   functools.partial(<bound method Foo.a of <__main__.Foo object at 0x0417D3B0>>, 'hello'),
#   functools.partial(<bound method Foo.b of <__main__.Foo object at 0x0417D3B0>>, 1, 2, 3)
# ]
This solution was a bit contrived because of the way you wanted it structured, but if you don't need a context manager and don't need the session to look like a direct function call, it's trivial to just use partial:
from functools import partial

my_queue = []

# some session
my_queue.append(partial(f, a))
my_queue.append(partial(f, b))

for func in my_queue:
    func()

Skipping execution of a with block

I am defining a context manager class and I would like to be able to skip the block of code without raising an exception if certain conditions are met during instantiation. For example,
class My_Context(object):
    def __init__(self, mode=0):
        """
        if mode = 0, proceed as normal
        if mode = 1, do not execute block
        """
        self.mode = mode

    def __enter__(self):
        if self.mode == 1:
            print 'Exiting...'
            CODE TO EXIT PREMATURELY

    def __exit__(self, type, value, traceback):
        print 'Exiting...'


with My_Context(mode=1):
    print 'Executing block of codes...'
According to PEP-343, a with statement translates from:
with EXPR as VAR:
    BLOCK
to:
mgr = (EXPR)
exit = type(mgr).__exit__  # Not calling it yet
value = type(mgr).__enter__(mgr)
exc = True
try:
    try:
        VAR = value  # Only if "as VAR" is present
        BLOCK
    except:
        # The exceptional case is handled here
        exc = False
        if not exit(mgr, *sys.exc_info()):
            raise
        # The exception is swallowed if exit() returns true
finally:
    # The normal and non-local-goto cases are handled here
    if exc:
        exit(mgr, None, None, None)
As you can see, there is nothing obvious you can do from the call to the __enter__() method of the context manager that can skip the body ("BLOCK") of the with statement.
People have done Python-implementation-specific things, such as manipulating the call stack inside of the __enter__(), in projects such as withhacks. I recall Alex Martelli posting a very interesting with-hack on stackoverflow a year or two back (don't recall enough of the post off-hand to search and find it).
But the simple answer to your question / problem is that you cannot do what you're asking, skipping the body of the with statement, without resorting to so-called "deep magic" (which is not necessarily portable between python implementations). With deep magic, you might be able to do it, but I recommend only doing such things as an exercise in seeing how it might be done, never in "production code".
If you want an ad-hoc solution that uses the ideas from withhacks (specifically from AnonymousBlocksInPython), this will work:
import sys
import inspect


class My_Context(object):
    def __init__(self, mode=0):
        """
        if mode = 0, proceed as normal
        if mode = 1, do not execute block
        """
        self.mode = mode

    def __enter__(self):
        if self.mode == 1:
            print 'Met block-skipping criterion ...'
            # Do some magic
            sys.settrace(lambda *args, **keys: None)
            # Python 2 only: inspect.currentframe is an alias of sys._getframe here,
            # which is why it accepts a depth argument.
            frame = inspect.currentframe(1)
            frame.f_trace = self.trace

    def trace(self, frame, event, arg):
        raise

    def __exit__(self, type, value, traceback):
        print 'Exiting context ...'
        return True
Compare the following:
with My_Context(mode=1):
    print 'Executing block of code ...'
with
with My_Context(mode=0):
    print 'Executing block of code ...'
A Python 3 update to the hack mentioned by the other answers, from withhacks (specifically from AnonymousBlocksInPython):
import sys


class SkipWithBlock(Exception):
    pass


class SkipContextManager:
    def __init__(self, skip):
        self.skip = skip

    def __enter__(self):
        if self.skip:
            sys.settrace(lambda *args, **keys: None)
            frame = sys._getframe(1)
            frame.f_trace = self.trace

    def trace(self, frame, event, arg):
        raise SkipWithBlock()

    def __exit__(self, type, value, traceback):
        if type is None:
            return  # No exception
        if issubclass(type, SkipWithBlock):
            return True  # Suppress special SkipWithBlock exception


with SkipContextManager(skip=True):
    print('In the with block')  # Won't be called

print('Out of the with block')
As mentioned before by joe, this is a hack that should be avoided:
The method trace() is called when a new local scope is entered, i.e. right when the code in your with block begins. When an exception is raised here, it gets caught by __exit__(). That's how this hack works. I should add that this is very much a hack and should not be relied upon. The magical sys.settrace() is not actually a part of the language definition, it just happens to be in CPython. Also, debuggers rely on sys.settrace() to do their job, so using it yourself interferes with that. There are many reasons why you shouldn't use this code. Just FYI.
Based on @Peter's answer, here's a version that uses no string manipulation but should otherwise work the same way:
from contextlib import contextmanager


@contextmanager
def skippable_context(skip):
    skip_error = ValueError("Skipping Context Exception")
    prev_entered = getattr(skippable_context, "entered", False)
    skippable_context.entered = False

    def command():
        skippable_context.entered = True
        if skip:
            raise skip_error

    try:
        yield command
    except ValueError as err:
        if err != skip_error:
            raise
    finally:
        assert skippable_context.entered, "Need to call returned command at least once."
        skippable_context.entered = prev_entered


print("=== Running with skip disabled ===")
with skippable_context(skip=False) as command:
    command()
    print("Entering this block")
print("... Done")

print("=== Running with skip enabled ===")
with skippable_context(skip=True) as command:
    command()
    raise NotImplementedError("... But this will never be printed")
print("... Done")
What you're trying to do isn't possible, unfortunately. If __enter__ raises an exception, that exception is raised at the with statement (__exit__ isn't called). If it doesn't raise an exception, then the return value is fed to the block and the block executes.
Closest thing I could think of is a flag checked explicitly by the block:
class Break(Exception):
    pass


class MyContext(object):
    def __init__(self, mode=0):
        """
        if mode = 0, proceed as normal
        if mode = 1, do not execute block
        """
        self.mode = mode

    def __enter__(self):
        if self.mode == 1:
            print 'Exiting...'
        return self.mode

    def __exit__(self, type, value, traceback):
        if type is None:
            print 'Normal exit...'
            return  # no exception
        if issubclass(type, Break):
            return True  # suppress exception
        print 'Exception exit...'


with MyContext(mode=1) as skip:
    if skip:
        raise Break()
    print 'Executing block of codes...'
This also lets you raise Break() in the middle of a with block to simulate a normal break statement.
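For instance, assuming the MyContext and Break classes above, a Break raised part-way through behaves like a break statement: __exit__ suppresses it and execution resumes after the block:
with MyContext(mode=0) as skip:
    if skip:
        raise Break()
    print('first half of the block runs')
    raise Break()  # acts like `break`; __exit__ suppresses it
    print('second half is skipped')
print('execution continues here')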
Context managers are not the right construct for this. You're asking for the body to be executed n times, in this case zero or one. If you look at the general case, n where n >= 0, you end up with a for loop:
def do_squares(n):
    for i in range(n):
        yield i ** 2


for x in do_squares(3):
    print('square: ', x)

for x in do_squares(0):
    print('this does not print')
In your case, which is more special purpose, and doesn't require binding to the loop variable:
def should_execute(mode=0):
    if mode == 0:
        yield


for _ in should_execute(0):
    print('this prints')

for _ in should_execute(1):
    print('this does not')
Another slightly hacky option makes use of exec. This is handy because it can be modified to do arbitrary things (e.g. memoization of context-blocks):
from contextlib import contextmanager


@contextmanager
def skippable_context_exec(skip):
    SKIP_STRING = 'Skipping Context Exception'
    old_value = skippable_context_exec.is_execed if hasattr(skippable_context_exec, 'is_execed') else False
    skippable_context_exec.is_execed = False
    command = "skippable_context_exec.is_execed=True; " + ("raise ValueError('{}')".format(SKIP_STRING) if skip else '')
    try:
        yield command
    except ValueError as err:
        if SKIP_STRING not in str(err):
            raise
    finally:
        assert skippable_context_exec.is_execed, "You never called exec in your context block."
        skippable_context_exec.is_execed = old_value


print('=== Running with skip disabled ===')
with skippable_context_exec(skip=False) as command:
    exec(command)
    print('Entering this block')
print('... Done')

print('=== Running with skip enabled ===')
with skippable_context_exec(skip=True) as command:
    exec(command)
    print('... But this will never be printed')
print('... Done')
Would be nice to have something that gets rid of the exec without weird side effects, so if you can think of a way I'm all ears. The current lead answer to this question appears to do that but has some issues.
