My question is: how can I execute any context manager without using with?

Python has the idea of context managers. Instead of
file = open('some_file', 'w')
try:
    file.write('Hola!')
finally:
    file.close()
you can write
with open('some_file', 'w') as opened_file:
    opened_file.write('Hola!')
While in most cases the second form is the better solution, for the specific cases of unit testing and exploring in the interactive console the first can be more convenient, because you can execute it line by line:
>>> file = open('some_file', 'w')
>>> file.write('Hola!')
>>> file.close()
My question is: how can I execute any with context manager in this line-by-line fashion, best suited for exploring?
My actual use case follows below, but please try to give an answer that is generic and will work for other context managers too.
import flask
app = flask.Flask(__name__)
with app.test_request_context('/?name=Peter'):
    assert flask.request.path == '/'
    assert flask.request.args['name'] == 'Peter'
(from the Flask docs)
You can still use with syntax in the interactive console. However, a context manager is based on two magic methods, __enter__ and __exit__, so you can just call them directly:
class MyCtx(object):
    def __init__(self, f):
        self.f = f

    def __enter__(self):
        print("Enter")
        return self.f

    def __exit__(self, *args, **kwargs):
        print("Exit")

def foo():
    print("Hello")
usually you do:
with MyCtx(foo) as f:
    f()
Same as:
ctx = MyCtx(foo)
f = ctx.__enter__()
f()
ctx.__exit__()
Remember that a context manager's __exit__ method is used for handling errors raised within the context, so most of them have the signature __exit__(exception_type, exception_value, traceback). If you don't need to handle exceptions in your tests, just pass None values:

ctx.__exit__(None, None, None)
You can assign the result of app.test_request_context('/?name=Peter') to a variable (e.g. ctx), then call ctx.__enter__() to enter the context manager, and ctx.__exit__(None, None, None) to perform the cleanup. Note that you lose the safety guarantees of context managers unless you put the ctx.__exit__ call in a finally clause.
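As a generic, self-contained sketch of that pattern (using a throwaway contextlib-based manager for illustration, not Flask), the manual protocol looks like this:

```python
from contextlib import contextmanager

# A stand-in context manager just for illustration; any context
# manager object exposes the same __enter__/__exit__ protocol.
@contextmanager
def managed(value):
    print("enter")
    yield value
    print("exit")

ctx = managed(42)
obj = ctx.__enter__()                # what `with ... as obj` would bind
try:
    assert obj == 42                 # explore/test line by line here
finally:
    ctx.__exit__(None, None, None)   # cleanup, as leaving `with` would
```

The try/finally preserves the cleanup guarantee that `with` normally provides.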
Related
I have a database handler that uses the SQLAlchemy ORM to communicate with a database. Following SQLAlchemy's recommended practices, I interact with the session by using it as a context manager. How can I test what a function did inside that context manager?
EDIT: I realized the file structure mattered due to the complexity it introduced. I restructured the code below to more closely mirror the final file structure, and what a common production repo in my environment would look like: code defined in one file and tests in a completely separate file.
For example:
Code File (delete_things_from_table.py):
from db_handler import delete, SomeTable
def delete_stuff(handler):
    stmt = delete(SomeTable)
    with handler.Session.begin() as session:
        session.execute(stmt)
        session.commit()
Test File:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler
def test_delete_stuff():
    handler = Handler()
    dlt.delete_stuff(handler)

    # Test that session.execute was called
    # Test the value of 'stmt'
    # Test that session.commit was called
I am not looking for a solution specific to SQLAlchemy; I am only utilizing this to highlight what I want to test within a context manager, and any strategies for testing context managers are welcome.
After sleeping on it, I came up with a solution. I'd love additional/less complex solutions if there are any available, but this works:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler
class MockSession:
    def __init__(self):
        self.execute_params = []
        self.commit_called = False

    def execute(self, *args, **kwargs):
        self.execute_params.append(["call", args, kwargs])
        return self

    def commit(self):
        self.commit_called = True
        return self

    def begin(self):
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        pass
def test_delete_stuff(monkeypatch):
    handler = Handler()
    # Parens on 'MockSession' below are important: pass an instance, not the class
    monkeypatch.setattr(handler, "Session", MockSession())
    dlt.delete_stuff(handler)

    # Test that session.execute was called
    assert len(handler.Session.execute_params)
    # Test the value of 'stmt'
    assert str(handler.Session.execute_params[0][1][0]) == "DELETE FROM some_table"
    # Test that session.commit was called
    assert handler.Session.commit_called
Some key things to note:
I created a static mock instead of a MagicMock as it's easier to control the methods/data flow with a custom mock class
Since the SQLAlchemy session context manager requires a begin() call to start the context, my mock class needed a begin() method. Returning self from begin() allows us to test the values later.
Context managers rely on the magic methods __enter__ and __exit__, with the argument signatures you see above.
The mocked class contains mocked methods which alter instance variables allowing us to test later
This relies on monkeypatch (there are other ways, I'm sure), but what's important to note is that you want to patch in an instance of your mock class, not the class itself. The parentheses make a world of difference.
I don't think it's an elegant solution, but it's working. I'll happily take any suggestions for improvement.
How do I test the following code with unittest.mock:
def testme(filepath):
    with open(filepath) as f:
        return f.read()
Python 3
Patch builtins.open and use mock_open, which is part of the mock framework. patch used as a context manager returns the object used to replace the patched one:
from unittest.mock import patch, mock_open
with patch("builtins.open", mock_open(read_data="data")) as mock_file:
    assert open("path/to/open").read() == "data"
    mock_file.assert_called_with("path/to/open")
If you want to use patch as a decorator, using mock_open()'s result as the new= argument to patch can be a little bit weird. Instead, use patch's new_callable= argument and remember that every extra argument that patch doesn't use will be passed to the new_callable function, as described in the patch documentation:
patch() takes arbitrary keyword arguments. These will be passed to the Mock (or new_callable) on construction.
@patch("builtins.open", new_callable=mock_open, read_data="data")
def test_patch(mock_file):
    assert open("path/to/open").read() == "data"
    mock_file.assert_called_with("path/to/open")
Remember that in this case patch will pass the mocked object as an argument to your test function.
Python 2
You need to patch __builtin__.open instead of builtins.open, and mock is not part of unittest; you need to pip install it and import it separately:
from mock import patch, mock_open
with patch("__builtin__.open", mock_open(read_data="data")) as mock_file:
    assert open("path/to/open").read() == "data"
    mock_file.assert_called_with("path/to/open")
The way to do this has changed in mock 0.7.0 which finally supports mocking the python protocol methods (magic methods), particularly using the MagicMock:
http://www.voidspace.org.uk/python/mock/magicmock.html
An example of mocking open as a context manager (from the examples page in the mock documentation):
>>> open_name = '%s.open' % __name__
>>> with patch(open_name, create=True) as mock_open:
...     mock_open.return_value = MagicMock(spec=file)
...     with open('/some/path', 'w') as f:
...         f.write('something')
...
<mock.Mock object at 0x...>
>>> file_handle = mock_open.return_value.__enter__.return_value
>>> file_handle.write.assert_called_with('something')
With the latest versions of mock, you can use the really useful mock_open helper:
mock_open(mock=None, read_data=None)
A helper function to create a
mock to replace the use of open. It works for open called directly or
used as a context manager.
The mock argument is the mock object to configure. If None (the
default) then a MagicMock will be created for you, with the API
limited to methods or attributes available on standard file handles.
read_data is a string for the read method of the file handle to
return. This is an empty string by default.
>>> from mock import mock_open, patch
>>> m = mock_open()
>>> with patch('{}.open'.format(__name__), m, create=True):
...     with open('foo', 'w') as h:
...         h.write('some stuff')
>>> m.assert_called_once_with('foo', 'w')
>>> handle = m()
>>> handle.write.assert_called_once_with('some stuff')
To use mock_open for a simple file read() (the original mock_open snippet already given on this page is geared more for write):
my_text = "some text to return when read() is called on the file object"
mocked_open_function = mock.mock_open(read_data=my_text)
with mock.patch("__builtin__.open", mocked_open_function):
    with open("any_string") as f:
        print f.read()
Note, as per the docs for mock_open, this is specifically for read(), so it won't work with common patterns like for line in f, for example.
Uses python 2.6.6 / mock 1.0.1
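On mock versions where `for line in f` is not supported, one simple workaround is splitting the result of read() yourself. A sketch under modern Python 3 / unittest.mock (the data and path here are made up):

```python
from unittest.mock import mock_open, patch

# Hypothetical file contents; read() returns the whole string at once.
m = mock_open(read_data="line1\nline2\n")
with patch("builtins.open", m):
    with open("any_path") as f:
        # Iterate over read().splitlines() instead of over f directly
        lines = f.read().splitlines()
```

After the block, `lines` holds ["line1", "line2"], just as iterating over a real file (minus newlines) would give.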
The top answer is useful but I expanded on it a bit.
If you want to set the value of your file object (the f in as f) based on the arguments passed to open() here's one way to do it:
def save_arg_return_data(*args, **kwargs):
    mm = MagicMock(spec=file)
    mm.__enter__.return_value = do_something_with_data(*args, **kwargs)
    return mm

m = MagicMock()
m.side_effect = save_arg_return_data

# if your open() call is in the file mymodule.animals,
# use mymodule.animals as name_of_called_file
open_name = '%s.open' % name_of_called_file

with patch(open_name, m, create=True):
    # do testing here
    ...
Basically, open() will return an object and with will call __enter__() on that object.
To mock properly, we must mock open() to return a mock object. That mock object should then mock the __enter__() call on it (MagicMock will do this for us) to return the mock data/file object we want (hence mm.__enter__.return_value). Doing this with 2 mocks the way above allows us to capture the arguments passed to open() and pass them to our do_something_with_data method.
I passed an entire mock file as a string to open() and my do_something_with_data looked like this:
def do_something_with_data(*args, **kwargs):
    return args[0].split("\n")
This transforms the string into a list so you can do the following as you would with a normal file:
for line in file:
    # do action
I might be a bit late to the game, but this worked for me when calling open in another module without having to create a new file.
test.py
import unittest
import __builtin__
from mock import Mock, patch, mock_open, call
from MyObj import MyObj

class TestObj(unittest.TestCase):
    open_ = mock_open()
    with patch.object(__builtin__, "open", open_):
        ref = MyObj()
        ref.save("myfile.txt")
        assert open_.call_args_list == [call("myfile.txt", "wb")]
MyObj.py
class MyObj(object):
    def save(self, filename):
        with open(filename, "wb") as f:
            f.write("sample text")
By patching the open function inside the __builtin__ module to my mock_open(), I can mock writing to a file without creating one.
Note: If you are using a module that uses cython, or your program depends on cython in any way, you will need to import cython's __builtin__ module by including import __builtin__ at the top of your file. You will not be able to mock the universal __builtin__ if you are using cython.
If you don't need the file any further, you can decorate the test method:

@patch('builtins.open', mock_open(read_data="data"))
def test_testme():
    result = testme("any/path")
    assert result == "data"
To patch the built-in open() function with unittest: this worked for patching a read of a JSON config.
class ObjectUnderTest:
    def __init__(self, filename: str):
        with open(filename, 'r') as f:
            dict_content = json.load(f)
The mocked object is the io.TextIOWrapper object returned by the open() function
@patch("<src.where.object.is.used>.open",
       return_value=io.TextIOWrapper(io.BufferedReader(io.BytesIO(b'{"test_key": "test_value"}'))))
def test_object_function_under_test(self, mocker):
    ...
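A self-contained sketch of the same io-based idea (the function and file names here are illustrative, not from the original post, and the patch target is simplified to builtins.open):

```python
import io
import json
from unittest.mock import patch

def read_config(filename):
    # stand-in for the __init__ logic in ObjectUnderTest
    with open(filename, 'r') as f:
        return json.load(f)

# A real TextIOWrapper backed by in-memory bytes, so json.load
# sees a genuine text-mode file object rather than a MagicMock.
fake_file = io.TextIOWrapper(
    io.BufferedReader(io.BytesIO(b'{"test_key": "test_value"}')))

with patch("builtins.open", return_value=fake_file):
    result = read_config("ignored.json")
```

Because the fake file is a real io object, methods like read() and iteration behave exactly as they would on disk.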
I'm using pytest in my case, and the good news is that in Python 3 the unittest library can also be imported and used without issue.
Here is my approach. First, I create a conftest.py file with reusable pytest fixture(s):
from functools import cache
from unittest.mock import MagicMock, mock_open
import pytest
from pytest_mock import MockerFixture
class FileMock(MagicMock):
    def __init__(self, mocker: MagicMock = None, **kwargs):
        super().__init__(**kwargs)
        if mocker:
            self.__dict__ = mocker.__dict__
            # configure mock object to replace the use of open(...)
            # note: this is useful in scenarios where data is written out
            _ = mock_open(mock=self)

    @property
    def read_data(self):
        return self.side_effect

    @read_data.setter
    def read_data(self, mock_data: str):
        """set mock data to be returned when `open(...).read()` is called."""
        self.side_effect = mock_open(read_data=mock_data)

    @property
    @cache
    def write_calls(self):
        """a list of calls made to `open().write(...)`"""
        handle = self.return_value
        write: MagicMock = handle.write
        return write.call_args_list

    @property
    def write_lines(self) -> str:
        """a list of written lines (as a string)"""
        return ''.join([c[0][0] for c in self.write_calls])


@pytest.fixture
def mock_file_open(mocker: MockerFixture) -> FileMock:
    return FileMock(mocker.patch('builtins.open'))
I decided to make read_data a property, in order to be more Pythonic. It can be assigned in a test function with whatever data open() needs to return.
In my test file, named something like test_it_works.py, I have the following test cases to confirm the intended functionality:
from unittest.mock import call
def test_mock_file_open_and_read(mock_file_open):
    mock_file_open.read_data = 'hello\nworld!'
    with open('/my/file/here', 'r') as in_file:
        assert in_file.readlines() == ['hello\n', 'world!']
    mock_file_open.assert_called_with('/my/file/here', 'r')

def test_mock_file_open_and_write(mock_file_open):
    with open('/out/file/here', 'w') as f:
        f.write('hello\n')
        f.write('world!\n')
        f.write('--> testing 123 :-)')
    mock_file_open.assert_called_with('/out/file/here', 'w')
    assert call('world!\n') in mock_file_open.write_calls
    assert mock_file_open.write_lines == """\
hello
world!
--> testing 123 :-)
""".rstrip()
Sourced from a GitHub snippet that patches read and write functionality in Python.
import configparser
import pytest
simpleconfig = """[section]\nkey = value\n\n"""
def test_monkeypatch_open_read(mockopen):
    filename = 'somefile.txt'
    mockopen.write(filename, simpleconfig)

    parser = configparser.ConfigParser()
    parser.read(filename)
    assert parser.sections() == ['section']
def test_monkeypatch_open_write(mockopen):
    parser = configparser.ConfigParser()
    parser.add_section('section')
    parser.set('section', 'key', 'value')

    filename = 'somefile.txt'
    parser.write(open(filename, 'wb'))
    assert mockopen.read(filename) == simpleconfig
SIMPLE @patch with assert

If you want to use @patch, where open() is called inside the handler and the file is read:

@patch("builtins.open", new_callable=mock_open, read_data="data")
def test_lambda_handler(self, mock_open_file):
    lambda_handler(event, {})
    mock_open_file.assert_called()
The following code uses a context manager to store and load variables into files.
However, it's very annoying to have to set the value property of what is yielded by the context manager (loaded.value).
I would like to
Not have to define a new class like LoadedValue
Set the yielded value of the context manager (loaded) to be the value that is saved.
Solving either of these issues would be appreciated.
import os
import pickle
from contextlib import contextmanager
class LoadedValue:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return "<LoadedValue: {}>".format(self.value)


@contextmanager
def load_manager(load_file="file.pkl"):
    with open(load_file, "rb") as f:
        loaded_object = LoadedValue(pickle.load(f))
    try:
        yield loaded_object
    finally:
        with open(load_file, "wb+") as f:
            pickle.dump(loaded_object.value, f)
if __name__ == "__main__":
    filename = "test.pkl"
    with open(filename, "wb+") as f:
        pickle.dump(7, f)

    with load_manager(filename) as loaded:
        print(loaded)  # >>> <LoadedValue: 7>
        loaded.value = 5  # this is what I have to do
        # loaded = 5  # this is what I want to do

    with load_manager(filename) as loaded:
        print(loaded)  # >>> <LoadedValue: 5>
Note: This was originally posted on CodeReview, but I have decided to repost it here in order to get answers, and leave it on CodeReview to help improve the code in other ways.
No, there is no way to override the assignment operator in Python, so you cannot do loaded = 5.
(You could override other things, so it might sort of work:)
override __call__ to allow loaded(5)
override __lshift__ to allow loaded << 5
override __ior__ to allow loaded |= 5
(However, be forewarned: your coworkers, current or future, may never forgive you.)
Also, things that occur within the scope of the with XXXX as Y: block are not (typically) accessible to the method that is yielding the context, unless that scope was previously accessible to the place yielding the context (i.e. the global namespace, etc.).
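For completeness, a sketch of what those (inadvisable) overrides could look like, using a hypothetical LoadedValue-style class:

```python
class Holder:
    # hypothetical stand-in for LoadedValue
    def __init__(self, value):
        self.value = value

    def __call__(self, value):     # allows holder(5)
        self.value = value
        return self

    def __lshift__(self, value):   # allows holder << 5
        self.value = value
        return self

    def __ior__(self, value):      # allows holder |= 5
        self.value = value
        return self

loaded = Holder(7)
loaded(5)
assert loaded.value == 5
loaded << 6
assert loaded.value == 6
loaded |= 9    # rebinds loaded to __ior__'s return value, which is self
assert loaded.value == 9
```

None of these change what plain assignment does; they only approximate it with other operators.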
Below is an example of my my_create method, and an example of that method in use.
@contextmanager
def my_create(**attributes):
    obj = MyObject(**attributes)
    yield obj
    obj.save()
with my_create(a=10) as new_obj:
    new_obj.b = 7

new_obj.a  # => 10
new_obj.b  # => 7
new_obj.is_saved()  # => True
To users of Ruby/Rails, this may look familiar. It's similar to the ActiveRecord::create method, with the code inside the with block acting as, well, a block.
However:
with my_create(a=10) as new_obj:
    pass

new_obj.a  # => 10
new_obj.is_saved()  # => True
In the above example, I've passed an empty "block" to my my_create function. Things work as expected (new_obj was initialized and saved), but the formatting looks a little wonky, and the with block seems unnecessary.
I would prefer to be able to call my_create directly, without having to setup a passing with block. Unfortunately, that's not possible with my current implementation of my_create.
my_obj = my_create(a=10)
my_obj  # => <contextlib.GeneratorContextManager at 0x107c21050>
I'd have to call both __enter__ and __exit__ on the GeneratorContextManager to get my desired result.
The question:
Is there a way to write my my_create function so that it can be called with a "block" as an optional "parameter"? I don't want to pass an optional function to my_create. I want my_create to optionally yield execution to a block of code.
The solution doesn't have to involve with or contextmanager. For instance, the same results as above can be achieved with a generator and a for loop, although the syntax becomes even more unclear.
At this point I'm afraid that a readable-enough-to-be-sensibly-usable solution doesn't exist, but I'm still interested to see what everyone comes up with.
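For illustration, the for-loop version mentioned above could look like this (MyObject here is a minimal stub, since the original class isn't shown):

```python
class MyObject:
    # minimal stub standing in for the real class
    def __init__(self, **attributes):
        self.__dict__.update(attributes)
        self._saved = False

    def save(self):
        self._saved = True

    def is_saved(self):
        return self._saved

def my_create(**attributes):
    obj = MyObject(**attributes)
    yield obj          # the loop body plays the role of the "block"
    obj.save()         # runs when the loop resumes the generator

for new_obj in my_create(a=10):
    new_obj.b = 7
```

The single yield means the loop body runs exactly once, and save() runs when the loop resumes the generator, so after the loop new_obj.is_saved() is true. It works, but as noted, the for-loop syntax obscures the intent.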
Some clarification:
Another example would be:
@contextmanager
def header_file(path):
    touch(path)
    f = open(path, 'w')
    f.write('This is the header')
    yield f
    f.close()
with header_file('some/path') as f:
    f.write('some more stuff')
another_f = header_file('some/other/path')
I always want to do the __enter__ and __exit__ parts of the context manager. I don't always want to supply a block. I don't want to have to set up a passing with block if I don't have to.
This is possible and easy in Ruby. It would be cool if it were possible in Python too, since we're already so close (we just have to set up a passing with block). I understand that the language mechanics make it difficult (technically impossible?), but a close-enough solution is interesting to me.
Add a new method on MyObject which creates and saves.
class MyObject:
    @classmethod
    def create(cls, **attributes):
        obj = cls(**attributes)
        obj.save()
        return obj
This is an alternate initializer, a factory, and the design pattern has precedent in Python standard libraries and in many popular frameworks. Django models use this pattern where an alternate initializer Model.create(**args) can offer additional features that the usual Model(**args) would not (e.g. persisting to the database).
Is there a way to write my my_create function so that it can be called with a "block" as an optional "parameter"?
No.
I'd suggest using different functions to get a context manager that saves an object on __exit__ and to get an automatically saved object. There's no easy way to have one function do both things. (There are no "blocks" that you can pass around, other than functions, which you say you don't want.)
For instance, you could create a second function that just creates and immediately saves an object without running any extra code to run in between:
def create_and_save(**args):
    obj = MyObject(**args)
    obj.save()
    return obj
So you could make it work with two functions. But a more Pythonic approach would probably be to get rid of the context manager function and make the MyObject class serve as its own context manager. You can give it very simple __enter__ and __exit__ methods:
def __enter__(self):
    return self

def __exit__(self, exception_type, exception_value, traceback):
    if exception_type is None:
        self.save()
Your first example would become:
with MyObject(a=10) as new_obj:
    new_obj.b = 7
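Put together as a runnable sketch (with a stub MyObject, since the real class isn't shown):

```python
class MyObject:
    # stub: only the attributes needed to demonstrate the pattern
    def __init__(self, **attributes):
        self.__dict__.update(attributes)
        self.saved = False

    def save(self):
        self.saved = True

    def __enter__(self):
        return self

    def __exit__(self, exception_type, exception_value, traceback):
        if exception_type is None:
            self.save()   # save only on a clean exit

with MyObject(a=10) as new_obj:
    new_obj.b = 7
```

On a clean exit from the with block, __exit__ calls save(), so new_obj.saved is true afterwards; if the block raises, the object is left unsaved.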
You could also turn the create_and_save function I showed above into a classmethod:
@classmethod
def create_and_save(cls, **args):
    obj = cls(**args)
    obj.save()
    return obj
Your second example would then be:
new_obj = MyObject.create_and_save(a=10)
Both of those methods could be written in a base class and simply inherited by other classes, so you wouldn't need to rewrite them all the time.
OK, there seems to be some confusion, so I've been forced to come up with an example solution. Here's the best I've been able to come up with so far:
class my_create(object):
    def __new__(cls, **attributes):
        with cls.block(**attributes) as obj:
            pass
        return obj

    @classmethod
    @contextmanager
    def block(cls, **attributes):
        obj = MyClass(**attributes)
        yield obj
        obj.save()
If we design my_create like above, we can use it normally without a block:
new_obj = my_create(a=10)
new_obj.a  # => 10
new_obj.is_saved()  # => True
And we can call it slightly differently with a block.
with my_create.block(a=10) as new_obj:
    new_obj.b = 7

new_obj.a  # => 10
new_obj.b  # => 7
new_obj.saved  # => True
Calling my_create.block is somewhat similar to calling Celery's Task.s, and users who don't want to call my_create with a block just call it normally, so I'll allow it.
However, this implementation of my_create looks wonky, so we can create a wrapper to make it more like the implementation of context_manager(my_create) in the question.
import types

# The abstract base class for a block-accepting "function"
class BlockAcceptor(object):
    def __new__(cls, *args, **kwargs):
        with cls.block(*args, **kwargs) as yielded_value:
            pass
        return yielded_value

    @classmethod
    @contextmanager
    def block(cls, *args, **kwargs):
        raise NotImplementedError

# The wrapper
def block_acceptor(f):
    block_accepting_f = type(f.func_name, (BlockAcceptor,), {})
    f.func_name = 'block'
    block_accepting_f.block = types.MethodType(contextmanager(f), block_accepting_f)
    return block_accepting_f
Then my_create becomes:
@block_acceptor
def my_create(cls, **attributes):
    obj = MyClass(**attributes)
    yield obj
    obj.save()
In use:
# creating with a block
with my_create.block(a=10) as new_obj:
    new_obj.b = 7

new_obj.a  # => 10
new_obj.b  # => 7
new_obj.saved  # => True

# creating without a block
new_obj = my_create(a=10)
new_obj.a  # => 10
new_obj.saved  # => True
Ideally the my_create function wouldn't need to accept a cls, and the block_acceptor wrapper would handle that, but I haven't got time to make those changes just now.
Pythonic? No. Useful? Possibly.
I'm still interested to see what others come up with.
With a slight change, you can get really close to what you want, just not via an implementation using contextlib.contextmanager:
creator = build_creator_obj()

# "with" context-manager interface
with creator as obj:
    obj.attr = 'value'

# "call" interface
obj = creator(attr='value')
Where creator is an object that implements __enter__ and __exit__ for the first usage and implements __call__ for the second usage.
You can also hide the construction of creator inside a property on some persistent object, e.g.:
class MyDatabase():
    @property
    def create(self):
        return build_creator_obj()

db = MyDatabase()

# so that you can do either/both:
with db.create as obj:
    obj.attr = 'value'

obj = db.create(attr='value')
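A sketch of what build_creator_obj might return (all names here are hypothetical; the original doesn't show the implementation):

```python
class MyObject:
    # stub object with a save() method, for demonstration only
    def __init__(self, **attributes):
        self.__dict__.update(attributes)
        self.saved = False

    def save(self):
        self.saved = True

class Creator:
    def __call__(self, **attributes):    # obj = creator(attr='value')
        obj = MyObject(**attributes)
        obj.save()
        return obj

    def __enter__(self):                 # with creator as obj:
        self._obj = MyObject()
        return self._obj

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            self._obj.save()

def build_creator_obj():
    return Creator()

creator = build_creator_obj()
with creator as obj:
    obj.attr = 'value'

obj2 = creator(attr='value')
```

Both usages end with a saved object; the with form lets you set attributes before save() runs, while the call form saves immediately.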
I've got two files:
REF_FILE: a file with changing data
TEST_FILE: a file with fixed data (it's simply REF_FILE captured at a given moment)
Now I want to test this function:
def get_info_from_extract(mpm):
    fid = open(REF_FILE)
    all_infos = json.load(fid)
    fid.close()
    for m in all_infos:
        if m['mpm_id'] == mpm:
            break
    return m
class Test_info_magento(unittest.TestCase):
    def test_should_have_value(self):
        # GIVEN
        mpm = 107
        expected_value = 1.345

        # WHEN
        # MOCK OPEN FUNCTION TO READ TEST_FILE
        m = file_info.get_info_from_extract(mpm)

        # THEN
        self.assertEqual(m['value'], expected_value)
The problem is that REF_FILE changes often, so I can't test against it reliably. I need to use TEST_FILE instead, and for that I need to mock the open function. I can't figure out how to mock it, and I would like some help making it return TEST_FILE.
I would recommend rewriting the function so it accepts a file-like object (it would be easier to test and maintain).
However, if you cannot, try this context manager:
class MockOpen(object):
    def __call__(self, *args, **kwargs):
        # print('mocked')
        return self.__open(TEST_FILE)  # it would be better to return a file-like object instead

    def __enter__(self):
        global open
        self.__open = open
        open = self

    def __exit__(self, exception_type, exception_value, traceback):
        global open
        open = self.__open
with MockOpen():
    # here you run your test
    ...
The context manager replaces (within the with statement block) the built-in function referenced by the global name open with itself. Every call to open() in the body of the with block is then a call to the __call__() method, which ignores all its arguments and returns the opened TEST_FILE.
It is not the best possible implementation, since:
it uses an actual file, slowing your tests; a file-like object should be returned instead,
it is not configurable; a file name (or content) should be given to its constructor.