How to create a Python class with a single-use context manager

If we look at the Python docs, they state:
Most context managers are written in a way that means they can only be used effectively in a with statement once. These single use context managers must be created afresh each time they’re used - attempting to use them a second time will trigger an exception or otherwise not work correctly.
This common limitation means that it is generally advisable to create context managers directly in the header of the with statement where they are used (as shown in all of the usage examples above).
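For illustration, the familiar open() call follows this recommendation: creating the manager in the with header means a fresh, single-use object every time:

for attempt in range(2):
    with open('example.txt', 'w') as f:  # a brand-new file object each iteration
        f.write('hello\n')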
Yet, the example most commonly shared for creating context managers inside classes is:
from contextlib import ContextDecorator
import logging

logging.basicConfig(level=logging.INFO)

class track_entry_and_exit(ContextDecorator):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        logging.info('Entering: %s', self.name)

    def __exit__(self, exc_type, exc, exc_tb):
        logging.info('Exiting: %s', self.name)
But, when I instantiate this class, I can pass it several times to a with statement:
In [8]: test_context = track_entry_and_exit('test')

In [9]: with test_context:
   ...:     pass
   ...:
INFO:root:Entering: test
INFO:root:Exiting: test

In [10]: with test_context:
    ...:     pass
    ...:
INFO:root:Entering: test
INFO:root:Exiting: test
How can I create a class that fails on the second call to the with statement?

Here is a possible solution:
from functools import wraps

class MultipleCallToCM(Exception):
    pass

def single_use(cls):
    if not ("__enter__" in vars(cls) and "__exit__" in vars(cls)):
        raise TypeError(f"{cls} is not a Context Manager.")

    org_new = cls.__new__

    @wraps(org_new)
    def new(clss, *args, **kwargs):
        instance = org_new(clss)
        instance._called = False
        return instance

    cls.__new__ = new

    org_enter = cls.__enter__

    @wraps(org_enter)
    def enter(self):
        if self._called:
            raise MultipleCallToCM("You can't call this CM twice!")
        self._called = True
        return org_enter(self)

    cls.__enter__ = enter
    return cls

@single_use
class CM:
    def __enter__(self):
        print("Enter to the CM")

    def __exit__(self, exc_type, exc_value, exc_tb):
        print("Exit from the CM")

with CM():
    print("Inside.")
print("-----------------------------------")

with CM():
    print("Inside.")
print("-----------------------------------")

cm = CM()
with cm:
    print("Inside.")
print("-----------------------------------")

with cm:
    print("Inside.")
output:
Enter to the CM
Inside.
Exit from the CM
-----------------------------------
Enter to the CM
Inside.
Exit from the CM
-----------------------------------
Enter to the CM
Inside.
Exit from the CM
-----------------------------------
Traceback (most recent call last):
  File "...", line 51, in <module>
    with cm:
  File "...", line 24, in enter
    raise MultipleCallToCM("You can't call this CM twice!")
__main__.MultipleCallToCM: You can't call this CM twice!
I used a class decorator so that it can be applied to other context manager classes. I wrapped the __new__ method to give every instance a flag called _called, then replaced the original __enter__ with my enter, which checks whether the object has already been used in a with statement.
How robust is this? I don't know. It seems to work, and I hope it gives an idea at least.
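As a quick check, here is the decorator applied to the class from the question (a sketch assuming the single_use decorator above is in scope):

from contextlib import ContextDecorator
import logging

logging.basicConfig(level=logging.INFO)

@single_use
class track_entry_and_exit(ContextDecorator):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        logging.info('Entering: %s', self.name)

    def __exit__(self, exc_type, exc, exc_tb):
        logging.info('Exiting: %s', self.name)

test_context = track_entry_and_exit('test')
with test_context:
    pass
with test_context:  # second use raises MultipleCallToCM
    pass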

Arguably the simplest method is mentioned two paragraphs further down in the documentation you have cited:
Context managers created using contextmanager() are also single use context managers, and will complain about the underlying generator failing to yield if an attempt is made to use them a second time
Here is the corresponding invocation for your example:
>>> from contextlib import contextmanager
>>> @contextmanager
... def track_entry_and_exit(name):
...     print('Entering', name)
...     yield
...     print('Exiting', name)
...
>>> c = track_entry_and_exit('test')
>>> with c:
...     pass
...
Entering test
Exiting test
>>> with c:
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.9/contextlib.py", line 115, in __enter__
    del self.args, self.kwds, self.func
AttributeError: args
It's even a class although it is written as a function:
>>> type(c)
<class 'contextlib._GeneratorContextManager'>
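Since every call to the decorated factory builds a fresh _GeneratorContextManager, the pattern the docs recommend, creating the manager directly in the with header, keeps working indefinitely. A small check, assuming the @contextmanager version of track_entry_and_exit above:

with track_entry_and_exit('first'):
    pass
with track_entry_and_exit('second'):  # fine: a brand-new manager each time
    pass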

I suggest considering an iterable class instead of a context manager, like this:
class Iterable:
    """Iterable that can be iterated only once."""

    def __init__(self, name):
        self.name = name
        self.it = iter([self])

    def __iter__(self):
        # code to acquire resource
        print('enter')
        yield next(self.it)
        print('exit')
        # code to release resource

    def __repr__(self):
        return f'{self.__class__.__name__}({self.name})'
It can be iterated only once:
>>> it = Iterable('iterable')
>>> for item in it:
...     print('entered', item)
...
enter
entered Iterable(iterable)
exit
>>> for item in it:
...     print('entered', item)
...
RuntimeError: generator raised StopIteration
A context manager can be written in the same manner:
class Context:
    """Context manager that can be used only once."""

    def __init__(self, name):
        self.name = name
        self.it = iter([self])

    def __enter__(self):
        print('enter')
        return next(self.it)

    def __exit__(self, exc_type, exc, exc_tb):
        print('exit')

    def __repr__(self):
        return f'{self.__class__.__name__}({self.name})'
It works only once:
>>> ctx = Context('context')
>>> with ctx as c:
...     print('entered', c)
...
enter
entered Context(context)
exit
>>> with ctx as c:
...     print('entered', c)
...
enter
StopIteration:
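For comparison, here is a minimal sketch of the same single-use behaviour using an explicit flag instead of the iterator trick (OnceContext and its attributes are illustrative names, not from the answer above):

class OnceContext:
    """Context manager that raises on the second use."""

    def __init__(self, name):
        self.name = name
        self._used = False

    def __enter__(self):
        if self._used:
            raise RuntimeError(f'{self.name} has already been used')
        self._used = True
        print('enter')
        return self

    def __exit__(self, exc_type, exc, exc_tb):
        print('exit')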

Related

Mocking a class used in a with statement

I have a class which has an __exit__ and __enter__ function so that I can use it in a with statement, e.g.:
with ClassName() as c:
    c.do_something()
I am now trying to write a unit test to test this. Basically, I am trying to test that do_something() has only been called once.
An example (which I called testmocking1):
class temp:
    def __init__(self):
        pass

    def __enter__(self):
        pass

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass

    def test_method(self):
        return 1

def fun():
    with temp() as t:
        return t.test_method()
And my test:
import unittest
import test_mocking1
from test_mocking1 import fun
import mock
from mock import patch

class MyTestCase(unittest.TestCase):
    @patch('test_mocking1.temp', autospec=True)
    def test_fun_enter_called_once(self, mocked_object):
        fun()
        mocked_object.test_method.assert_called_once()

if __name__ == '__main__':
    unittest.main()
So I would expect this to pass, because the test_method has been called exactly once in the function fun(). But the actual result that I get is:
======================================================================
FAIL: test_fun_enter_called_once (__main__.MyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<path_to_virtual_env>\lib\site-packages\mock\mock.py", line 1305, in patched
    return func(*args, **keywargs)
  File "<File_with_test>", line 11, in test_fun_enter_called_once
    mocked_object.test_method.assert_called_once()
  File "<path_to_virtual_env>\lib\site-packages\mock\mock.py", line 915, in assert_called_once
    raise AssertionError(msg)
AssertionError: Expected 'test_method' to have been called once. Called 0 times.
How do I test whether a function in a class which is created using a with statement has been called (either once or multiple times), and (related) how do I set the results of those calls (using .side_effect or .return_value)?
The with statement takes whatever __enter__ returns to bind to the name in the as <name> part. You bound it to t:
with temp() as t:
    t.test_method()
Note that temp() is called, so the with statement starts with temp.return_value. t is not temp.return_value either, it is whatever temp().__enter__() returns, so you need to use the return value for that call:
entered = mocked_object.return_value.__enter__.return_value
entered.test_method.assert_called_once()
Extending on this, if you want to alter what test_method() returns, do so on the return value of mocked_object.return_value.__enter__.return_value.
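For example, a sketch using the same patch target as above (fun and test_mocking1 as defined in the question):

from test_mocking1 import fun
from mock import patch

with patch('test_mocking1.temp', autospec=True) as mocked_object:
    entered = mocked_object.return_value.__enter__.return_value
    entered.test_method.return_value = 42  # set the result of t.test_method()
    assert fun() == 42
    entered.test_method.assert_called_once()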
You can always print out the mock_calls attribute of your object to see what has happened to it:
>>> from test_mocking1 import fun
>>> from mock import patch
>>> with patch('test_mocking1.temp', autospec=True) as mocked_object:
...     fun()
...
>>> print(mocked_object.mock_calls)
[call(),
 call().__enter__(),
 call().__enter__().test_method(),
 call().__exit__(None, None, None)]
>>> mocked_object.return_value.__enter__.return_value.test_method.called
True
>>> mocked_object.return_value.__enter__.return_value.test_method.call_count
1
Note that your actual implementation of temp.__enter__() returns None, so without mocking your fun() function fails with an attribute error.
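To make the real class work the same way outside of tests, __enter__ should return the instance, e.g.:

def __enter__(self):
    return self  # so `with temp() as t:` binds t to the instance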

How to find out if (the source code of) a function contains a call to a method from a specific module?

Let's say, I have a bunch of functions a, b, c, d and e and I want to find out if they call any method from the random module:
def a():
    pass

def b():
    import random

def c():
    import random
    random.randint(0, 1)

def d():
    import random as ra
    ra.randint(0, 1)

def e():
    from random import randint as ra
    ra(0, 1)
I want to write a function uses_module so I can expect these assertions to pass:
assert uses_module(a) == False
assert uses_module(b) == False
assert uses_module(c) == True
assert uses_module(d) == True
assert uses_module(e) == True
(uses_module(b) is False because random is only imported and none of its methods is ever called.)
I can't modify a, b, c, d and e. So I thought it might be possible to use ast for this and walk along the function's code which I get from inspect.getsource. But I'm open to any other proposals, this was only an idea how it could work.
This is as far as I've come with ast:
def uses_module(function):
    import ast
    import inspect

    nodes = ast.walk(ast.parse(inspect.getsource(function)))
    for node in nodes:
        print(node.__dict__)
This is a work in progress, but perhaps it will spark a better idea. I am using the types of nodes in the AST to attempt to assert that a module is imported and some function it provides is used.
I have added what may be the necessary pieces to determine that this is the case to a checker defaultdict which can be evaluated for some set of conditions, but I am not using all key value pairs to establish an assertion for your use cases.
from collections import defaultdict

def uses_module(function):
    """
    (WIP) assert that a function uses a module
    """
    import ast
    import inspect

    nodes = ast.walk(ast.parse(inspect.getsource(function)))
    checker = defaultdict(set)
    for node in nodes:
        if type(node) in [ast.alias, ast.Import, ast.Name, ast.Attribute]:
            nd = node.__dict__
            if type(node) == ast.alias:
                checker['alias'].add(nd.get('name'))
            if nd.get('name') and nd.get('asname'):
                checker['name'].add(nd.get('name'))
                checker['asname'].add(nd.get('asname'))
            if nd.get('ctx') and nd.get('attr'):
                checker['attr'].add(nd.get('attr'))
            if nd.get('id'):
                checker['id'].add(hex(id(nd.get('ctx'))))
            if nd.get('value') and nd.get('ctx'):
                checker['value'].add(hex(id(nd.get('ctx'))))
    # print(dict(checker))  # for debug
    # This check passes your use cases, but probably needs to be expanded
    if checker.get('alias') and checker.get('id'):
        return True
    return False
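Building on the same idea, here is a more direct sketch: it resolves import aliases explicitly, then looks for calls made through them. It covers the five functions from the question but deliberately ignores re-assignments, getattr-style access, and anything fancier:

import ast
import inspect

def uses_module(function, module='random'):
    """Return True if `function` calls something imported from `module`."""
    tree = ast.parse(inspect.getsource(function))
    module_names = set()  # names bound to the module itself
    member_names = set()  # names bound to members of the module
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name == module:
                    module_names.add(alias.asname or alias.name)
        elif isinstance(node, ast.ImportFrom) and node.module == module:
            for alias in node.names:
                member_names.add(alias.asname or alias.name)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            if (isinstance(func, ast.Attribute)
                    and isinstance(func.value, ast.Name)
                    and func.value.id in module_names):
                return True  # e.g. random.randint(...) or ra.randint(...)
            if isinstance(func, ast.Name) and func.id in member_names:
                return True  # e.g. ra(0, 1) after `from random import randint as ra`
    return False

With a to e from the question in scope, the original assertions pass unchanged.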
You can replace the random module with a mock object, providing custom attribute access and hence intercepting function calls. Whenever one of the functions tries to import (from) random it will actually access the mock object. The mock object can also be designed as a context manager, handing back the original random module after the test.
import sys

class Mock:
    import random
    random = random

    def __enter__(self):
        sys.modules['random'] = self
        self.method_called = False
        return self

    def __exit__(self, *args):
        sys.modules['random'] = self.random

    def __getattr__(self, name):
        def mock(*args, **kwargs):
            self.method_called = True
            return getattr(self.random, name)
        return mock

def uses_module(func):
    with Mock() as m:
        func()
    return m.method_called
Variable module name
A more flexible way, specifying the module's name, is achieved by:
import importlib
import sys

class Mock:
    def __init__(self, name):
        self.name = name
        self.module = importlib.import_module(name)

    def __enter__(self):
        sys.modules[self.name] = self
        self.method_called = False
        return self

    def __exit__(self, *args):
        sys.modules[self.name] = self.module

    def __getattr__(self, name):
        def mock(*args, **kwargs):
            self.method_called = True
            return getattr(self.module, name)
        return mock

def uses_module(func):
    with Mock('random') as m:
        func()
    return m.method_called
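Either variant can be checked against the functions from the question. Note that, unlike an AST-based approach, this actually executes the functions under test:

for func, expected in [(a, False), (b, False), (c, True), (d, True), (e, True)]:
    assert uses_module(func) == expected, func.__name__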
You can simply place a mock random.py in your local (test) directory containing the following code:
# >= Python 3.7.
def __getattr__(name):
    def mock(*args, **kwargs):
        raise RuntimeError(f'{name}: {args}, {kwargs}')  # For example.
    return mock

# <= Python 3.6.
class Wrapper:
    def __getattr__(self, name):
        def mock(*args, **kwargs):
            raise RuntimeError('{}: {}, {}'.format(name, args, kwargs))  # For example.
        return mock

import sys
sys.modules[__name__] = Wrapper()
Then you simply test your functions as follows:
def uses_module(func):
    try:
        func()
    except RuntimeError as err:
        print(err)
        return True
    return False
This works because instead of importing the builtin random module it will go for the mock module which emulates custom attribute access and hence can intercept the function calls.
If you don't want to interrupt the functions by raising an exception you can still use the same approach, by importing the original random module in the mock module (modifying sys.path appropriately) and then falling back on the original functions.

Python multiple context managers in one class

I would like to be able to write code like this:
with obj.in_batch_mode:
    obj.some_attr = "some_value"
    obj.some_int = 142
    ...
when I want obj to wait with sending updates about itself until multiple jobs are completed. I have hooks on __setattr__ that take some time to run, and the changes can be sent together.
I do not want to use code like this, since it increases the risk of forgetting to leave batch_mode (which is what the with keyword is good for):
obj.enter_batch_mode()
obj.some_attr = "some_value"
obj.some_int = 142
...
obj.exit_batch_mode()
I have not been able to figure out how to implement this. Just typing with obj: (and simply implementing with on obj) does not read anywhere near as descriptive.
Generally, a very simple way to implement context managers is to use the contextlib module. Writing a context manager becomes as simple as writing a single-yield generator: the section before the yield replaces the __enter__ method, the object yielded is the return value of __enter__, and the section after the yield is effectively the __exit__ method. Any function on your class can be a context manager; it just needs to be decorated as such. For instance, take this simple ConsoleWriter class:
from contextlib import contextmanager
from sys import stdout
from io import StringIO
from functools import partial

class ConsoleWriter:
    def __init__(self, out=stdout, fmt=None):
        self._out = out
        self._fmt = fmt

    @property
    @contextmanager
    def batch(self):
        original_out = self._out
        self._out = StringIO()
        try:
            yield self
        except Exception:
            # There was a problem. Ignore batch commands.
            # (do not swallow the exception though)
            raise
        else:
            # no problem
            original_out.write(self._out.getvalue())
        finally:
            self._out = original_out

    @contextmanager
    def verbose(self, fmt="VERBOSE: {!r}"):
        original_fmt = self._fmt
        self._fmt = fmt
        try:
            yield self
        finally:
            # don't care about errors, just restore the fmt
            self._fmt = original_fmt

    def __getattr__(self, attr):
        """creates function that writes capitalised attribute three times"""
        return partial(self.write, attr.upper()*3)

    def write(self, arg):
        if self._fmt:
            arg = self._fmt.format(arg)
        print(arg, file=self._out)
Example usage:
writer = ConsoleWriter()
with writer.batch:
    print("begin batch")
    writer.a()
    writer.b()
    with writer.verbose():
        writer.c()
    print("before reentrant block")
    with writer.batch:
        writer.d()
    print("after reentrant block")
    print("end batch -- all data is now flushed")
Output:
begin batch
before reentrant block
after reentrant block
end batch -- all data is now flushed
AAA
BBB
VERBOSE: 'CCC'
DDD
If you are after a simple solution and do not need any nested mode-change (e.g. from STD to BATCH to VERBOSE back to BATCH back to STD):
class A(object):
    STD_MODE = 'std'
    BATCH_MODE = 'batch'
    VERBOSE_MODE = 'verb'

    def __init__(self):
        self.mode = self.STD_MODE

    def in_mode(self, mode):
        self.mode = mode
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, tb):
        self.mode = self.STD_MODE

obj = A()
print(obj.mode)
with obj.in_mode(obj.BATCH_MODE) as x:
    print(x.mode)
print(obj.mode)
outputs
std
batch
std
This builds on Pynchia's answer, but adds support for multiple modes and allows nesting of with statements, even with the same mode multiple times. It scales O(#nested_modes) which is basically O(1).
Just remember to use stacks for data storage related to the modes.
class A():
    _batch_mode = "batch_mode"
    _mode_stack = []

    @property
    def in_batch_mode(self):
        self._mode_stack.append(self._batch_mode)
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, tb):
        self._mode_stack.pop()
        if self._batch_mode not in self._mode_stack:
            self.apply_edits()
and then I have these checks wherever I need them:
if self._batch_mode not in self._mode_stack:
    self.apply_edits()
It is also possible to use methods for modes:
with x.in_some_mode(my_arg):
just remember to save my_arg in a stack within x, and to clear it from the stack when that mode is popped from the mode stack.
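A sketch of such a parametrised mode (the class, method and attribute names here are illustrative):

class B:
    _verbose_mode = "verbose_mode"

    def __init__(self):
        self._mode_stack = []
        self._arg_stack = []

    def in_some_mode(self, my_arg):
        # push the mode and its argument together...
        self._mode_stack.append(self._verbose_mode)
        self._arg_stack.append(my_arg)
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, tb):
        # ...and pop them together when the mode is left
        mode = self._mode_stack.pop()
        if mode == self._verbose_mode:
            self._arg_stack.pop()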
The code using this object can now be
with obj.in_batch_mode:
    obj.some_property = "some_value"
and there are no problems with nesting, so we can add another with obj.in_some_mode: wherever without any hard-to-debug errors or having to check every function called to make sure the object's with-statements are never nested:
def b(obj):
    with obj.in_batch_mode:
        obj.some_property = "some_value"

x = A()
with x.in_batch_mode:
    x.my_property = "my_value"
    b(x)
Maybe something like this:
Implement a helper class:
class WithHelperObj(object):
    def __init__(self, obj):
        self.obj = obj

    def __enter__(self):
        self.obj.impl_enter_batch()

    def __exit__(self, exc_type, exc_value, traceback):
        self.obj.impl_exit_batch()

class MyObject(object):
    def in_batch_mode(self):
        return WithHelperObj(self)
In the class itself, implement methods rather than fields for use with the with statement:
    def impl_enter_batch(self):
        print('In impl_enter_batch')

    def impl_exit_batch(self):
        print('In impl_exit_batch')

    def doing(self):
        print('doing')
Then use it:
o = MyObject()
with o.in_batch_mode():
    o.doing()

Implementing a state machine with decorators

While learning about decorators in Python, I wondered whether it is possible to use decorators to simulate a state machine.
Example:
from enum import Enum

class CoffeeState(Enum):  # defined first so the decorator arguments resolve
    Initial = 0
    Grounding = 1
    Heating = 2
    Pumping = 3

class CoffeeMachine(object):
    def __init__(self):
        self.state = CoffeeState.Initial

    # @Statemachine(shouldbe, willbe)
    @Statemachine(CoffeeState.Initial, CoffeeState.Grounding)
    def ground_beans(self):
        print("ground_beans")

    @Statemachine(CoffeeState.Grounding, CoffeeState.Heating)
    def heat_water(self):
        print("heat_water")

    @Statemachine(CoffeeState.Heating, CoffeeState.Pumping)
    def pump_water(self):
        print("pump_water")
So all the statemachine does is check whether my current state is the requested one; if it is, it should call the underlying function and finally advance the state.
How would you implement this?
Sure you can, provided your decorator makes an assumption about where the state is stored:
from functools import wraps

class StateMachineWrongState(Exception):
    def __init__(self, shouldbe, current):
        self.shouldbe = shouldbe
        self.current = current
        super().__init__((shouldbe, current))

def statemachine(shouldbe, willbe):
    def decorator(f):
        @wraps(f)
        def wrapper(self, *args, **kw):
            if self.state != shouldbe:
                raise StateMachineWrongState(shouldbe, self.state)
            try:
                return f(self, *args, **kw)
            finally:
                self.state = willbe
        return wrapper
    return decorator
The decorator expects to get self passed in; i.e. it should be applied to methods in a class. It then expects self to have a state attribute to track the state machine state.
Demo:
>>> cm = CoffeeMachine()
>>> cm.state
<CoffeeState.Initial: 0>
>>> cm.ground_beans()
ground_beans
>>> cm.state
<CoffeeState.Grounding: 1>
>>> cm.ground_beans()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in wrapper
__main__.StateMachineWrongState: (<CoffeeState.Initial: 0>, <CoffeeState.Grounding: 1>)
>>> cm.heat_water()
heat_water
>>> cm.pump_water()
pump_water
>>> cm.state
<CoffeeState.Pumping: 3>

Nesting Python context managers

In this question, I defined a context manager that contains a context manager. What is the easiest correct way to accomplish this nesting? I ended up calling self.temporary_file.__enter__() in self.__enter__(). However, in self.__exit__, I am pretty sure I have to call self.temporary_file.__exit__(type_, value, traceback) in a finally block in case an exception is raised. Should I be setting the type_, value, and traceback parameters if something goes wrong in self.__exit__? I checked contextlib, but couldn't find any utilities to help with this.
Original code from question:
import itertools as it
import tempfile

class WriteOnChangeFile:
    def __init__(self, filename):
        self.filename = filename

    def __enter__(self):
        self.temporary_file = tempfile.TemporaryFile('r+')
        self.f = self.temporary_file.__enter__()
        return self.f

    def __exit__(self, type_, value, traceback):
        try:
            try:
                with open(self.filename, 'r') as real_f:
                    self.f.seek(0)
                    overwrite = any(
                        l != real_l
                        for l, real_l in it.zip_longest(self.f, real_f))
            except IOError:
                overwrite = True
            if overwrite:
                with open(self.filename, 'w') as real_f:
                    self.f.seek(0)
                    for l in self.f:
                        real_f.write(l)
        finally:
            self.temporary_file.__exit__(type_, value, traceback)
The easy way to create context managers is with contextlib.contextmanager. Something like this:
@contextlib.contextmanager
def write_on_change_file(filename):
    with tempfile.TemporaryFile('r+') as temporary_file:
        yield temporary_file
        ...  # some saving logic that you had in __exit__ ...
Then use with write_on_change_file(...) as f:.
The body of the with statement will be executed “instead of” the yield. Wrap the yield itself in a try block if you want to catch any exceptions that happen in the body.
The temporary file will always be properly closed (when its with block ends).
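For instance, a sketch of the same function with the yield wrapped in a try block (contextlib and tempfile imported as in the question):

import contextlib
import tempfile

@contextlib.contextmanager
def write_on_change_file(filename):
    with tempfile.TemporaryFile('r+') as temporary_file:
        try:
            yield temporary_file  # the body of the with statement runs here
        except Exception:
            raise  # the body failed: skip the saving logic, re-raise
        else:
            ...  # the body succeeded: the saving logic goes here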
contextlib.contextmanager works great for functions, but when I need a class as a context manager, I use the following util:
import abc
import contextlib

class ContextManager(metaclass=abc.ABCMeta):
    """Class which can be used as `contextmanager`."""

    def __init__(self):
        self.__cm = None

    @abc.abstractmethod
    @contextlib.contextmanager
    def contextmanager(self):
        raise NotImplementedError('Abstract method')

    def __enter__(self):
        self.__cm = self.contextmanager()
        return self.__cm.__enter__()

    def __exit__(self, exc_type, exc_value, traceback):
        return self.__cm.__exit__(exc_type, exc_value, traceback)
This allows declaring context manager classes with the generator syntax from @contextlib.contextmanager. It makes it much more natural to nest context managers, without having to manually call __enter__ and __exit__. Example:
class MyClass(ContextManager):
    def __init__(self, filename):
        self._filename = filename

    @contextlib.contextmanager
    def contextmanager(self):
        with tempfile.TemporaryFile() as temp_file:
            yield temp_file
            ...  # Post-processing you previously had in __exit__

with MyClass('filename') as x:
    print(x)
I wish this was in the standard library...
