With statement destructor to catch init exceptions - python

I use a with statement with the following class:

    def __init__(self):
        ...

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        print("EXIT Shutting the SDK down")
        ret = self.sdkobject.ShutDown()
        self.error_check(ret)

This catches any errors that occur while I am using an object of the class and safely shuts down the SDK that I am using. However, it doesn't catch problems that occur while the class is still initializing. I recently found the __del__ method, which neatly solves this problem. However, it can't be used in conjunction with __exit__ (the with statement invokes __exit__, and then __del__ hits an exception). How can I set up a destructor using a with statement that will catch failures even during initialization?

Exceptions in the __init__ need to be dealt with directly in that method:
    class YourContextManager(object):
        sdkobject = None

        def __init__(self):
            try:
                self._create_sdk_object()
            except Exception:
                if self.sdkobject is not None:
                    self.sdkobject.ShutDown()
                raise

        def _create_sdk_object(self):
            self.sdkobject = SomeSDKObject()
            self.sdkobject.do_something_that_could_raise_an_exception()

        def __enter__(self):
            return self

        def __exit__(self, type, value, traceback):
            print("EXIT Shutting the SDK down")
            ret = self.sdkobject.ShutDown()
            self.error_check(ret)
Note that the exception is re-raised; you want to give the consumer of the context manager an opportunity to handle the failure to create a context manager.

Create a separate shutdown function that gets called in the try/except block of the __init__ and wherever else you need a proper shutdown.

Catch the exception in __init__ and handle it. __del__ is unnecessary.
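To make the advice above concrete, here is a minimal sketch of a shared shutdown helper called from both __init__ (on failure) and __exit__. FakeSDK, SdkSession, and the fail flag are hypothetical stand-ins for the real SDK objects:

```python
class FakeSDK:
    """Illustrative stand-in for the real SDK."""
    def __init__(self, fail=False):
        if fail:
            raise RuntimeError("init failed")
    def ShutDown(self):
        return 0

class SdkSession:
    def __init__(self, fail=False):
        self.sdkobject = None
        try:
            self.sdkobject = FakeSDK(fail=fail)
        except Exception:
            self._shutdown()  # clean up any partial state, then re-raise
            raise

    def _shutdown(self):
        # Single shutdown path, shared by __init__ (on failure) and __exit__.
        if self.sdkobject is not None:
            self.sdkobject.ShutDown()
            self.sdkobject = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._shutdown()

with SdkSession() as session:
    pass  # SDK is shut down exactly once, via _shutdown()
```

Because the guard in _shutdown() makes it idempotent, it is safe to call from both places without double-freeing the SDK.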

Related

Should the "opening work" of a context manager happen in __init__ or __enter__?

I found the following example of a Context Manager for a File object:
    class File(object):
        def __init__(self, file_name, method):
            self.file_obj = open(file_name, method)

        def __enter__(self):
            return self.file_obj

        def __exit__(self, type, value, traceback):
            self.file_obj.close()
Here, the work done by the manager, that is actually opening the file, happens in the __init__ method. However, in the accompanying text, they suggest that the file opening should happen in the __enter__ call:
Let’s talk about what happens under the hood.

1. The with statement stores the __exit__ method of the File class.
2. It calls the __enter__ method of the File class.
3. The __enter__ method opens the file and returns it.
4. The opened file handle is passed to opened_file.
5. We write to the file using .write().
6. The with statement calls the stored __exit__ method.
7. The __exit__ method closes the file.
Which is the correct approach in general? It seems to me that the work undone by __exit__ should happen in __enter__, not __init__, since those are paired 1:1 by the context manager mechanism, but this example leaves me doubtful.
There is no general answer. It depends on what the work is. For example, for a file, opening happens in __init__, but for a lock, locking happens in __enter__.
One important thing to think about is, what should happen if the object is not used as a context manager, or not immediately used as a context manager? What should the object's state be after construction? Should relevant resources already be acquired at that point?
For a file, the answer is yes, the file should already be open, so opening happens in __init__. For a lock, the answer is no, the lock should not be locked, so locking goes in __enter__.
Another thing to consider is, should this object be usable as a context manager more than once? If entering a context manager twice should do a thing twice, that thing needs to happen in __enter__.
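The file-versus-lock contrast above can be sketched as two tiny context managers. OpensEarly and LocksLate are illustrative names, not standard APIs:

```python
import threading

class OpensEarly:
    """File-style: the resource is acquired in __init__, so it is
    usable immediately after construction, but the manager is one-shot."""
    def __init__(self, path):
        self.f = open(path, "w")   # acquired at construction time
    def __enter__(self):
        return self.f
    def __exit__(self, exc_type, exc_value, tb):
        self.f.close()

class LocksLate:
    """Lock-style: nothing is held until the with block is entered,
    so the same object can be used as a context manager repeatedly."""
    def __init__(self):
        self._lock = threading.Lock()
    def __enter__(self):
        self._lock.acquire()       # acquired on entry, released on exit
        return self
    def __exit__(self, exc_type, exc_value, tb):
        self._lock.release()

gate = LocksLate()
with gate:
    pass
with gate:      # reusable: __enter__ acquires the lock again
    pass
```

Entering LocksLate twice acquires the lock twice; an OpensEarly-style manager would need to be reconstructed to reopen the file.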
Here is a better example:
    class TraceBlock:
        def message(self, arg):
            print('running ' + arg)

        def __enter__(self):
            print('starting with block')
            return self

        def __exit__(self, exc_type, exc_value, exc_tb):
            if exc_type is None:
                print('exited normally\n')
            else:
                print('raised an exception! ' + str(exc_type))
                return False  # propagate the exception

    #--------------------------
    if __name__ == '__main__':
        with TraceBlock() as action:
            action.message('test 1')
            print('reached')
        with TraceBlock() as action:
            action.message('test 2')
            raise TypeError
            print('not reached')
If __exit__ returns False, the exception is propagated on to other handlers; if it returns True, the exception is suppressed and does not reach them.

Coroutine that is guaranteed to exit its context managers

I'd like to use a context manager within a coroutine. This coroutine should handle an unknown number of steps, so it's unclear when the context manager should exit. I'd like it to exit when the coroutine goes out of scope / is garbage collected; however, this does not happen in the example below:
    import contextlib

    @contextlib.contextmanager
    def cm():
        print("STARTED")
        yield
        print("ENDED")

    def coro(a: str):
        with cm():
            print(a)
            while True:
                val1, val2 = yield
                print(val1, val2)

    c = coro("HI")
    c.send(None)
    print("---")
    c.send((1, 2))
    print("---!")
Output of this program:
    STARTED
    HI
    ---
    1 2
    ---!
The context manager never printed "ENDED".
How can I make a coroutine that will support any number of steps, and be guaranteed to exit gracefully? I don't want to make this a responsibility of the caller.
TL;DR: The issue is that when an exception is raised (and not handled) inside a with block, the __exit__ method of the context manager is called with that exception. For contextmanager-decorated generators, this causes the exception to be thrown into the generator. cm does not handle this exception, so the cleanup code is not run. When coro is garbage collected, its close method is called, which throws a GeneratorExit into coro (which then gets thrown into cm). What follows is a detailed description of the above steps.
The close method throws a GeneratorExit into coro, which means a GeneratorExit is raised at the point of yield. coro doesn't handle the GeneratorExit, so it exits the context via an error. This causes the __exit__ method of the context manager to be called with the error and its information. What does the __exit__ method from a contextmanager-decorated generator do? If it is called with an exception, it throws that exception into the underlying generator.
At this point a GeneratorExit is raised from the yield statement in the body of our context manager. That unhandled exception causes the cleanup code not to run. The unhandled exception propagates out of the context manager's generator and is passed back to the __exit__ of the contextmanager decorator. Being the same error that was thrown in, __exit__ returns False to indicate that the original error sent to __exit__ was unhandled.
Finally, the GeneratorExit continues propagating outside of the with block inside coro, where it continues to be unhandled. However, not handling a GeneratorExit is normal for generators, so the original close method suppresses the GeneratorExit.
See this part of the yield documentation:
If the generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), the generator-iterator’s close() method will be called, allowing any pending finally clauses to execute.
Looking at the close documentation we see:
Raises a GeneratorExit at the point where the generator function was paused. If the generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), close returns to its caller.
This part of the with statement documentation:
The suite is executed.
The context manager’s __exit__() method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__(). Otherwise, three None arguments are supplied.
And the code of the __exit__ method for the contextmanager decorator.
So with all this context (rim-shot), the easiest way we can get the desired behavior is with a try-except-finally in the definition of our context manager. This is the suggested method from the contextlib docs. And all their examples follow this form.
Thus, you can use a try…except…finally statement to trap the error (if any), or ensure that some cleanup takes place.
    import contextlib

    @contextlib.contextmanager
    def cm():
        try:
            print("STARTED")
            yield
        except Exception:
            raise
        finally:
            print("ENDED")

    def coro(a: str):
        with cm():
            print(a)
            while True:
                val1, val2 = yield
                print(val1, val2)

    c = coro("HI")
    c.send(None)
    print("---")
    c.send((1, 2))
    print("---!")
The output is now:
    STARTED
    HI
    ---
    1 2
    ---!
    ENDED
as desired.
We could also define our context manager in the traditional manner, as a class with an __enter__ and an __exit__ method, and still get the correct behavior:
    class CM:
        def __enter__(self):
            print('STARTED')

        def __exit__(self, exc_type, exc_value, traceback):
            print('ENDED')
            return False
The situation is somewhat simpler, because we can see exactly what the __exit__ method is without having to go to the source code. The GeneratorExit gets sent (as a parameter) to __exit__ where __exit__ happily runs its cleanup code and then returns False. This is not strictly necessary as otherwise None (another Falsey value) would have been returned, but it indicates that any exception that was sent to __exit__ was not handled. (The return value of __exit__ doesn't matter if there was no exception).
You can do it by telling the coroutine to shut down: send it something that will cause it to break out of the loop and return, as illustrated below. Doing so will cause a StopIteration exception to be raised where this is done, so I added another context manager to allow it to be suppressed. Note that I have also added a coroutine decorator to make coroutines start automatically when first called, but that part is strictly optional.
    import contextlib
    from typing import Callable

    QUIT = 'quit'

    def coroutine(func: Callable):
        """ Decorator to make coroutines automatically start when called. """
        def start(*args, **kwargs):
            cr = func(*args, **kwargs)
            next(cr)
            return cr
        return start

    @contextlib.contextmanager
    def ignored(*exceptions):
        try:
            yield
        except exceptions:
            pass

    @contextlib.contextmanager
    def cm():
        print("STARTED")
        yield
        print("ENDED")

    @coroutine
    def coro(a: str):
        with cm():
            print(a)
            while True:
                value = (yield)
                if value == QUIT:
                    break
                val1, val2 = value
                print(val1, val2)

    print("---")
    with ignored(StopIteration):
        c = coro("HI")
        # c.send(None)  # No longer needed.
        c.send((1, 2))
        c.send((3, 5))
        c.send(QUIT)  # Tell coroutine to clean itself up and exit.
    print("---!")
Output:
    ---
    STARTED
    HI
    1 2
    3 5
    ENDED
    ---!

Python How to force object instantiation via Context Manager?

I want to force object instantiation via a class context manager, so as to make it impossible to instantiate the class directly.
I implemented this solution, but technically a user can still instantiate the object:

    class HessioFile:
        """
        Represents a pyhessio file instance
        """
        def __init__(self, filename=None, from_context_manager=False):
            if not from_context_manager:
                raise HessioError('HessioFile can only be used with a context manager')

And the context manager:

    @contextmanager
    def open(filename):
        """
        ...
        """
        hessfile = HessioFile(filename, from_context_manager=True)

Any better solution?
If you assume that your clients will follow basic Python coding principles, then you can guarantee that no method of your class will be called unless you are inside the context.
Your client is not supposed to call __enter__ explicitly; therefore, if __enter__ has been called, you know your client used a with statement and is therefore inside the context (__exit__ will be called).
You just need a boolean attribute that remembers whether you are inside or outside the context.
    class Obj:
        def __init__(self):
            self._inside_context = False

        def __enter__(self):
            self._inside_context = True
            print("Entering context.")
            return self

        def __exit__(self, *exc):
            print("Exiting context.")
            self._inside_context = False

        def some_stuff(self, name):
            if not self._inside_context:
                raise Exception("This method should be called from inside context.")
            print("Doing some stuff with", name)

        def some_other_stuff(self, name):
            if not self._inside_context:
                raise Exception("This method should be called from inside context.")
            print("Doing some other stuff with", name)

    with Obj() as inst_a:
        inst_a.some_stuff("A")
        inst_a.some_other_stuff("A")

    inst_b = Obj()
    with inst_b:
        inst_b.some_stuff("B")
        inst_b.some_other_stuff("B")

    inst_c = Obj()
    try:
        inst_c.some_stuff("c")
    except Exception:
        print("Instance C couldn't do stuff.")
    try:
        inst_c.some_other_stuff("c")
    except Exception:
        print("Instance C couldn't do some other stuff.")
This will print:
    Entering context.
    Doing some stuff with A
    Doing some other stuff with A
    Exiting context.
    Entering context.
    Doing some stuff with B
    Doing some other stuff with B
    Exiting context.
    Instance C couldn't do stuff.
    Instance C couldn't do some other stuff.
Since you'll probably have many methods that you want to "protect" from being called from outside the context, you can write a decorator to avoid repeating the same boolean check:
    def raise_if_outside_context(method):
        def decorator(self, *args, **kwargs):
            if not self._inside_context:
                raise Exception("This method should be called from inside context.")
            return method(self, *args, **kwargs)
        return decorator
Then change your methods to:
    @raise_if_outside_context
    def some_other_stuff(self, name):
        print("Doing some other stuff with", name)
I suggest the following approach:
    class MainClass:
        def __init__(self, *args, **kwargs):
            self._class = _MainClass(*args, **kwargs)

        def __enter__(self):
            print('entering...')
            return self._class

        def __exit__(self, exc_type, exc_val, exc_tb):
            # Teardown code
            print('running exit code...')
            pass

    # This class should not be instantiated directly!!
    class _MainClass:
        def __init__(self, attribute1, attribute2):
            self.attribute1 = attribute1
            self.attribute2 = attribute2
            ...

        def method(self):
            # execute code
            if self.attribute1 == "error":
                raise Exception
            print(self.attribute1)
            print(self.attribute2)

    with MainClass('attribute1', 'attribute2') as main_class:
        main_class.method()
    print('---')
    with MainClass('error', 'attribute2') as main_class:
        main_class.method()
This will output:
    entering...
    attribute1
    attribute2
    running exit code...
    ---
    entering...
    running exit code...
    Traceback (most recent call last):
      File "scratch_6.py", line 34, in <module>
        main_class.method()
      File "scratch_6.py", line 25, in method
        raise Exception
    Exception
None that I am aware of. Generally, if it exists in Python, you can find a way to call it. A context manager is, in essence, a resource management scheme... if there is no use case for your class outside of the manager, perhaps the context management could be integrated into the methods of the class? I would suggest checking out the atexit module from the standard library. It allows you to register cleanup functions much in the same way that a context manager handles cleanup, but you can bundle it into your class, so that each instantiation has a registered cleanup function. Might help.
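A minimal sketch of that atexit idea, assuming a hypothetical Resource class with an idempotent close() method:

```python
import atexit

class Resource:
    """Illustrative class that registers its own cleanup at exit."""
    def __init__(self, name):
        self.name = name
        self.closed = False
        # Run this instance's cleanup at interpreter exit, the way a
        # context manager would run it at the end of a with block.
        atexit.register(self.close)

    def close(self):
        # Idempotent: safe to call both early and again at exit.
        if not self.closed:
            self.closed = True
            print(f"cleaning up {self.name}")

r = Resource("db-handle")
r.close()   # may also be called early; the flag prevents double cleanup
```

One caveat: atexit.register keeps a reference to the bound method, so instances registered this way live until interpreter exit unless you also call atexit.unregister.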
It is worth noting that no amount of effort will prevent people from doing stupid things with your code. Your best bet is generally to make it as easy as possible for people to do smart things with your code.
You can think of hacky ways to try to enforce this (like inspecting the call stack to forbid direct calls to your object, or a boolean attribute set in __enter__ that you check before allowing other actions on the instance), but that will eventually become a mess to understand and explain to others.
Regardless, you should also be certain that people will always find ways to bypass it if they want to. Python doesn't really tie your hands down; if you want to do something silly, it lets you do it. Responsible adults, right?
If you need enforcement, you'd be better off supplying it as a documentation notice. That way, if users opt to instantiate directly and trigger unwanted behavior, it's their fault for not following the guidelines for your code.
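For completeness, here is a sketch of the call-stack-inspection hack mentioned above (not recommended, for exactly the maintainability reasons given; Guarded and open_guarded are illustrative names):

```python
import inspect

class Guarded:
    def __init__(self):
        # Walk the caller's frames and allow construction only when a
        # function named "open_guarded" appears in the stack. The name
        # check is a fragile convention, which is part of why this
        # approach is a mess in practice.
        callers = {frame.function for frame in inspect.stack()}
        if "open_guarded" not in callers:
            raise RuntimeError("Guarded must be created via open_guarded()")

def open_guarded():
    return Guarded()

g = open_guarded()      # allowed: created through the factory
try:
    Guarded()           # direct instantiation is rejected
except RuntimeError as e:
    print(e)
```

Anyone can still bypass this by defining their own function named open_guarded, which illustrates the point: it deters, but does not enforce.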

Understanding purpose of returning self in context manager class

I'm trying to understand how context managers work to catch errors, and more specifically the role of the __enter__() method in a class created to be used as a context manager: how it works in the 'error catching' process here, and why self is all that's returned from __enter__().
Given the following use of a context manager to catch an error:
    import unittest

    class InvoiceCalculatorTests(unittest.TestCase):
        def test_no_pay(self):
            with self.assertRaises(ValueError):
                pay = divide_pay(0, {"Alice": 3.0, "Bob": 3.0, "Carol": 6.0})
Here is what I believe is the source code for assertRaises:
    class _AssertRaisesContext(_AssertRaisesBaseContext):
        """A context manager used to implement TestCase.assertRaises* methods."""

        _base_type = BaseException
        _base_type_str = 'an exception type or tuple of exception types'

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_value, tb):
            if exc_type is None:
                try:
                    exc_name = self.expected.__name__
                except AttributeError:
                    exc_name = str(self.expected)
                if self.obj_name:
                    self._raiseFailure("{} not raised by {}".format(exc_name,
                                                                    self.obj_name))
                else:
                    self._raiseFailure("{} not raised".format(exc_name))
            else:
                traceback.clear_frames(tb)
            if not issubclass(exc_type, self.expected):
                # let unexpected exceptions pass through
                return False
            # store exception, without traceback, for later retrieval
            self.exception = exc_value.with_traceback(None)
            if self.expected_regex is None:
                return True
            expected_regex = self.expected_regex
            if not expected_regex.search(str(exc_value)):
                self._raiseFailure('"{}" does not match "{}"'.format(
                    expected_regex.pattern, str(exc_value)))
            return True
I've tried going through PEP-0343 to gain some insight, but it's a bit beyond my current knowledge/understanding to make sense of what's contained therein. Could someone explain, in relative layman's terms, the role of __enter__() and __exit__() in the process of 'catching' the ValueError here and why it is that __enter__() is just returning self?
__enter__() is for setup. This particular context manager doesn't require any setup, so all it has to do is return the object to be bound by the as clause, which is just itself.
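To see why the return value of __enter__() matters, here is a small illustrative pair of context managers: the as target receives whatever __enter__() returns, which need not be the manager itself (both classes are made-up examples):

```python
class ReturnsSelf:
    def __enter__(self):
        return self               # the "as" target is the manager itself
    def __exit__(self, exc_type, exc_value, tb):
        return False

class ReturnsOther:
    def __enter__(self):
        return {"payload": 42}    # the "as" target is whatever we return
    def __exit__(self, exc_type, exc_value, tb):
        return False

with ReturnsSelf() as a:
    print(isinstance(a, ReturnsSelf))   # True

with ReturnsOther() as b:
    print(b["payload"])                 # 42
```

_AssertRaisesContext returns self so that the test can later read attributes like self.exception off the object bound by the as clause.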

Encapsulating retries into `with` block

I'm looking to encapsulate logic for database transactions into a with block; wrapping the code in a transaction and handling various exceptions (locking issues). This is simple enough, however I'd like to also have the block encapsulate the retrying of the code block following certain exceptions. I can't see a way to package this up neatly into the context manager.
Is it possible to repeat the code within a with statement?
I'd like to use it as simply as this, which is really neat.
    def do_work():
        ...
        # This is ideal!
        with transaction(retries=3):
            # Atomic DB statements
            ...
        ...
I'm currently handling this with a decorator, but I'd prefer to offer the context manager (or in fact both), so I can choose to wrap a few lines of code in the with block instead of an inline function wrapped in a decorator, which is what I do at the moment:
    def do_work():
        ...
        # This is not ideal!
        @transaction(retries=3)
        def _perform_in_transaction():
            # Atomic DB statements
            ...
        _perform_in_transaction()
        ...
Is it possible to repeat the code within a with statement?
No.
As pointed out earlier in that mailing list thread, you can reduce a bit of duplication by making the decorator call the passed function:
    def do_work():
        ...
        # This is not ideal!
        @transaction(retries=3)
        def _perform_in_transaction():
            # Atomic DB statements
            ...
        # called implicitly
        ...
The way that occurs to me to do this is just to implement a standard database transaction context manager, but allow it to take a retries argument in the constructor. Then I'd just wrap that up in your method implementations. Something like this:
    class transaction(object):
        def __init__(self, retries=0):
            self.retries = retries

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_val, traceback):
            pass

        # Implementation...
        def execute(self, query):
            err = None
            for _ in range(self.retries):
                try:
                    return self._cursor.execute(query)
                except Exception as e:
                    err = e  # probably ought to save all errors, but hey
            raise err

    with transaction(retries=3) as cursor:
        cursor.execute('BLAH')
As decorators are just functions themselves, you could do the following:
    with transaction(_perform_in_transaction, retries=3) as _perf:
        _perf()
For the details, you'd need to implement transaction() as a factory that returns an object whose __call__() invokes the original method and repeats it up to retries times on failure; __enter__() and __exit__() would be defined as normal for database transaction context managers.
You could alternatively set up transaction() such that it itself executes the passed method up to retries number of times, which would probably require about the same amount of work as implementing the context manager but would mean actual usage would be reduced to just transaction(_perform_in_transaction, retries=3) (which is, in fact, equivalent to the decorator example delnan provided).
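That alternative can be sketched in a few lines. The names transaction and _perform_in_transaction follow the question, but this retry loop is an illustration, not a real database API:

```python
def transaction(func, retries=3):
    """Run func(); on exception, retry up to `retries` times total,
    re-raising the last error if every attempt fails."""
    last_err = None
    for attempt in range(retries):
        try:
            return func()
        except Exception as e:
            last_err = e
    raise last_err

# Illustrative workload that fails twice, then succeeds.
calls = []
def _perform_in_transaction():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("deadlock, retrying")
    return "committed"

print(transaction(_perform_in_transaction, retries=3))  # committed
```

Usage reduces to the single call transaction(_perform_in_transaction, retries=3), matching the decorator example in spirit.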
While I agree it can't be done with a context manager... it can be done with two context managers!
The result is a little awkward, and I am not sure whether I approve of my own code yet, but this is what it looks like as the client:
    with RetryManager(retries=3) as rm:
        while rm:
            with rm.protect:
                print("Attempt #%d of %d" % (rm.attempt_count, rm.max_retries))
                # Atomic DB statements
There is an explicit while loop still, and not one, but two, with statements, which leaves a little too much opportunity for mistakes for my liking.
Here's the code:
    class RetryManager(object):
        """ Context manager that counts attempts to run statements without
        exceptions being raised.
        - returns True when there should be more attempts
        """

        class _RetryProtector(object):
            """ Context manager that only raises exceptions if its parent
            RetryManager has given up."""
            def __init__(self, retry_manager):
                self._retry_manager = retry_manager

            def __enter__(self):
                self._retry_manager._note_try()
                return self

            def __exit__(self, exc_type, exc_val, traceback):
                if exc_type is None:
                    self._retry_manager._note_success()
                else:
                    # This would be a good place to implement sleep between
                    # retries.
                    pass
                # Suppress exception if the retry manager is still alive.
                return self._retry_manager.is_still_trying()

        def __init__(self, retries=1):
            self.max_retries = retries
            self.attempt_count = 0  # Note: 1-based.
            self._success = False
            self.protect = RetryManager._RetryProtector(self)

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_val, traceback):
            pass

        def _note_try(self):
            self.attempt_count += 1

        def _note_success(self):
            self._success = True

        def is_still_trying(self):
            return not self._success and self.attempt_count < self.max_retries

        def __bool__(self):
            return self.is_still_trying()
Bonus: I know you don't want to separate your work off into separate functions wrapped with decorators... but if you were happy with that, the redo package from Mozilla offers decorators to do exactly this, so you don't have to roll your own. There is even a context manager that effectively acts as a temporary decorator for your function, but it still relies on your retryable code being factored out into a single function.
This question is a few years old, but after reading the answers I decided to give this a shot.
This solution requires the use of a "helper" class, but I think it provides an interface with retries configured through a context manager.
    class Client:
        def _request(self):
            # do request stuff
            print("tried")
            raise Exception()

        def request(self):
            retry = getattr(self, "_retry", None)
            if not retry:
                return self._request()
            else:
                for n in range(retry.tries):
                    try:
                        return self._request()
                    except Exception:
                        retry.attempts += 1

    class Retry:
        def __init__(self, client, tries=1):
            self.client = client
            self.tries = tries
            self.attempts = 0

        def __enter__(self):
            self.client._retry = self

        def __exit__(self, *exc):
            print(f"Tried {self.attempts} times")
            del self.client._retry
    >>> client = Client()
    >>> with Retry(client, tries=3):
    ...     # will try 3 times
    ...     response = client.request()
    tried
    tried
    tried
    Tried 3 times
