I have a database handler that uses the SQLAlchemy ORM to communicate with a database. Following SQLAlchemy's recommended practices, I interact with the session by using it as a context manager. How can I test what a function that uses that context manager has done inside it?
EDIT: I realized the file structure mattered due to the complexity it introduced. I re-structured the code below to more closely mirror the end file structure, and what a common production repo in my environment looks like, with code defined in one file and tests in a completely separate file.
For example:
Code File (delete_things_from_table.py):
from db_handler import delete, SomeTable

def delete_stuff(handler):
    stmt = delete(SomeTable)
    with handler.Session.begin() as session:
        session.execute(stmt)
        session.commit()
Test File:
import pytest

import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff():
    handler = Handler()
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    # Test the value of 'stmt'
    # Test that session.commit was called
I am not looking for a solution specific to SQLAlchemy; I am only utilizing this to highlight what I want to test within a context manager, and any strategies for testing context managers are welcome.
After sleeping on it, I came up with a solution. I'd love additional/less complex solutions if there are any available, but this works:
import pytest

import delete_things_from_table as dlt
from db_handler import Handler

class MockSession:
    def __init__(self):
        self.execute_params = []
        self.commit_called = False

    def execute(self, *args, **kwargs):
        self.execute_params.append(["call", args, kwargs])
        return self

    def commit(self):
        self.commit_called = True
        return self

    def begin(self):
        return self

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        pass

def test_delete_stuff(monkeypatch):
    handler = Handler()
    # Parens on 'MockSession()' below are important: pass an instance, not the class
    monkeypatch.setattr(handler, "Session", MockSession())
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    assert len(handler.Session.execute_params)
    # Test the value of 'stmt'
    assert str(handler.Session.execute_params[0][1][0]) == "DELETE FROM some_table"
    # Test that session.commit was called
    assert handler.Session.commit_called
Some key things to note:
I created a hand-rolled mock instead of a MagicMock, as it's easier to control the methods and data flow with a custom mock class.
Since the SQLAlchemy session context manager requires a begin() to start the context, my mock class needed a begin() method. Returning self from begin() lets us inspect the recorded values later.
Context managers rely on the magic methods __enter__ and __exit__ with the argument signatures you see above.
The mocked class contains mocked methods that record their calls in instance variables, allowing us to make assertions later.
This relies on monkeypatch (there are other ways, I'm sure), but what's important to note is that when you patch in your mock class you want to pass an instance of the class and not the class itself. The parentheses make a world of difference.
I don't think it's an elegant solution, but it's working. I'll happily take any suggestions for improvement.
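One less verbose option is sketched below using unittest.mock.MagicMock, assuming the same Handler and delete_stuff as above: configure begin() so its return value's __enter__ hands back the mock session, then assert on the recorded calls.

from unittest.mock import MagicMock

import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff_with_magicmock(monkeypatch):
    handler = Handler()
    session = MagicMock()
    # begin() returns a context manager whose __enter__ yields the session itself
    session.begin.return_value.__enter__.return_value = session
    monkeypatch.setattr(handler, "Session", session)

    dlt.delete_stuff(handler)

    session.execute.assert_called_once()
    # the first positional argument of the execute() call is 'stmt'
    assert str(session.execute.call_args[0][0]) == "DELETE FROM some_table"
    session.commit.assert_called_once()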
I'm working on unit tests for a service I made that uses confluent-kafka. The goal is to test successful function calls, exception errors, etc. The problem I'm running into is that, since I'm instantiating the client in the constructor of my service, the tests are failing because I'm unsure how to patch a constructor. My question is: how do I mock my service in order to properly test its functionality?
Example_Service.py:
from confluent_kafka.schema_registry import SchemaRegistryClient

class ExampleService:
    def __init__(self, config):
        self.service = SchemaRegistryClient(config)

    def get_schema(self):
        return self.service.get_schema()
Example_Service_tests.py
from unittest import mock

from confluent_kafka.schema_registry import SchemaRegistryClient
from Example_Service import ExampleService

@mock.patch.object(SchemaRegistryClient, "get_schema")
def test_get_schema_success(mock_client):
    schema_Id = ExampleService.get_schema()
    mock_client.assert_called()
The problem is that you aren't creating an instance of ExampleService; __init__ never gets called.
You can avoid patching anything by allowing your class to accept a client factory as an argument (which can default to SchemaRegistryClient):
class ExampleService:
    def __init__(self, config, *, client_factory=SchemaRegistryClient):
        self.service = client_factory(config)
    ...
Then in your test, you can simply pass an appropriate stub as an argument:
from unittest.mock import Mock

def test_get_schema_success():
    mock_client = Mock()
    service = ExampleService(some_config, client_factory=mock_client)
    mock_client.assert_called()
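If you also want to exercise get_schema itself, the same stub can supply the return value. A sketch, where some_config and the expected "some-schema" value are placeholders:

from unittest.mock import Mock

def test_get_schema_returns_client_value():
    mock_client = Mock()
    # the instance returned by the factory is mock_client.return_value
    mock_client.return_value.get_schema.return_value = "some-schema"
    service = ExampleService(some_config, client_factory=mock_client)

    assert service.get_schema() == "some-schema"
    mock_client.assert_called_once_with(some_config)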
Two ways:
mock the entire class using @mock.patch(SchemaRegistryClient), OR
replace @mock.patch.object(SchemaRegistryClient, "get_schema") with
@mock.patch.object(SchemaRegistryClient, "__init__")
@mock.patch.object(SchemaRegistryClient, "get_schema")
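Put together, the second variant might look like the sketch below. Note that stacked patch decorators are passed to the test innermost-first, and that a patched __init__ must return None, so the mock is configured accordingly (the config dict is a placeholder):

from unittest import mock

from confluent_kafka.schema_registry import SchemaRegistryClient
from Example_Service import ExampleService

@mock.patch.object(SchemaRegistryClient, "get_schema")
@mock.patch.object(SchemaRegistryClient, "__init__", return_value=None)
def test_get_schema_success(mock_init, mock_get_schema):
    service = ExampleService({"url": "http://localhost:8081"})  # placeholder config
    service.get_schema()
    mock_get_schema.assert_called_once()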
What I would like to achieve is to mock functions in various modules automatically with pytest. So I defined this in my conftest.py:
import sys
import __builtin__
from itertools import chain

import pytest
from mock import Mock

# Fixture factory magic START
NORMAL_MOCKS = [
    "logger", "error", "logging", "base_error", "partial"]
BUILTIN_MOCKS = ["exit"]

def _mock_factory(name, builtin):
    def _mock(monkeypatch, request):
        module = __builtin__ if builtin else request.node.module.MODULE
        ret = Mock()
        monkeypatch.setattr(module, name, ret)
        return ret
    return _mock

iterable = chain(
    ((el, False) for el in NORMAL_MOCKS),
    ((el, True) for el in BUILTIN_MOCKS))

for name, builtin in iterable:
    fname = "mock_{name}".format(name=name)
    _tmp_fn = pytest.fixture(name=fname)(_mock_factory(name, builtin))
    _tmp_fn.__name__ = fname
    setattr(
        sys.modules[__name__],
        "mock_{name}".format(name=name), _tmp_fn)
# Fixture normal factory magic END
This works and all, but I would like to omit the usage of the NORMAL_MOCKS and BUILTIN_MOCKS lists. So basically, in a pytest hook, I should be able to see that there is, say, a mock_foo fixture that is not registered yet, create a mock for it with the factory, and register it. I just couldn't figure out how to do this. I was looking into the pytest_runtest_setup function, but could not figure out how to do the actual fixture registration. So basically I would like to know with which hook/call I can register new fixture functions programmatically.
One of the ways is to parameterize the tests at the collection/generation stage, i.e. before the test execution begins: https://docs.pytest.org/en/latest/example/parametrize.html
# conftest.py
import pytest

def mock_factory(name):
    return name

def pytest_generate_tests(metafunc):
    for name in metafunc.fixturenames:
        if name.startswith('mock_'):
            metafunc.parametrize(name, [mock_factory(name[5:])])

# test_me.py
def test_me(request, mock_it):
    print(mock_it)
A very simple solution. But the downside is that the test is reported as parametrized when it actually is not:
$ pytest -s -v -ra
====== test session starts ======
test_me.py::test_me[it] PASSED
====== 1 passed in 0.01 seconds ======
To fully simulate the function args without the parametrization, you can make a less obvious trick:
# conftest.py
import pytest

def mock_factory(name):
    return name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
    yield
The pytest_runtest_setup hook is also a good place for this, as far as I have just tried.
Note that you do not register the fixture in that case. It is too late for fixture registration, as all the fixtures are gathered and prepared much earlier, at the collection/parametrization stages. At this stage, you can only execute the tests and provide the values. It is your responsibility to calculate the fixture values and to destroy them afterward.
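For illustration, here is a sketch of how that cleanup could be handled around the yield; the `created` list and the teardown loop are assumptions, not part of the original snippet:

# conftest.py (sketch)
import pytest

def mock_factory(name):
    return name  # stand-in: build whatever mock object you need

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    created = []
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            value = mock_factory(name[5:])
            item.funcargs[name] = value
            created.append(value)
    yield  # the test runs here
    # after the yield, tear down whatever was injected
    for value in created:
        pass  # stand-in: destroy/close each injected value as appropriate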
The snippet below is a pragmatic solution to "how to dynamically add fixtures".
Disclaimer: I don't have expertise on pytest. I'm not saying this is what pytest was designed for; I just looked at the source code, came up with this, and it seems to work. The fact that I use "private" attributes means it might not work with all versions (currently I'm on pytest 7.1.3).
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import SubRequest
import pytest

# autouse is relevant, as then the fixture registration happens in time. It's too late if
# the fixture is required without autouse, e.g. via @pytest.mark.usefixtures("add_fixture_dynamically")
@pytest.fixture(autouse=True)
def add_fixture_dynamically(request: SubRequest):
    """
    Conditionally and dynamically adds another fixture. It's conditional on the presence of:
    @pytest.mark.my_mark()
    """
    marker = request.node.get_closest_marker("my_mark")

    # don't register the fixture if the marker is not present
    # (yield first, as a generator fixture must yield exactly once):
    if marker is None:
        yield
        return

    def your_fixture():  # the name of the fixture must match the parameter name, like other fixtures
        return "hello"

    # register the fixture just-in-time
    request._fixturemanager._arg2fixturedefs[your_fixture.__name__] = [
        FixtureDef(
            argname=your_fixture.__name__,
            func=your_fixture,
            scope="function",
            fixturemanager=request._fixturemanager,
            baseid=None,
            params=None,
        ),
    ]

    yield  # runs the test. Could be wrapped in try/except/finally
# suppress the warning (works if this and `add_fixture_dynamically` are in `conftest.py`)
def pytest_configure(config):
    """Prevents printing of the warning 'PytestUnknownMarkWarning: Unknown pytest.mark.<fixture_name>'"""
    config.addinivalue_line("markers", "my_mark")

@pytest.mark.my_mark()
def test_adding_fixture_dynamically(your_fixture):
    assert your_fixture == "hello"
I'd like to test whether a method calls a specific method of a temporary internal object (ConfigParser.read).
So the object is created inside, and it's not accessible from the outside after the method exits.
Using Python 2.7.
In foobar.py
import ConfigParser

class FooBar:
    def method(self, filename):
        config = ConfigParser.ConfigParser()
        config.read(filename)
        do_some_stuff()
I'd like to test whether config.read was called.
As I understand it, the patch decorator was made for this, but unfortunately the MagicMock object the test case receives is not the same one that is created inside the method, and I can't get near the object that lives inside it.
I tried like this:
class TestFooBar(TestCase):
    def setUp(self):
        self.myfoobar = FooBar()

    @mock.patch('foobar.ConfigParser')
    def test_read(self, mock_foobar):
        self.myfoobar.method("configuration.ini")
        assert mock_foobar.called  # THIS IS OKAY
        assert mock_foobar.read.called  # THIS FAILS
        mock_foobar.read.assert_called_with("configuration.ini")  # FAILS TOO
The problem is:
- mock_foobar is created before self.myfoobar.method creates the ConfigParser inside.
- when debugging, mock_foobar has internal data about the previous calls, but no "read" property (the inner MagicMock for mocking the read method).
Of course one way out is refactoring so that .read() or __init__() receives a ConfigParser object, but it's not always possible to change the code, and I'd like to grasp the internal objects of the method without touching the module under test.
You're so close! The issue is that you are mocking the class, but then your test checks that read() is called on that mock class - but you actually expect read() to be called on the instance that is returned when you call the class. The following works - I find the second test more readable than the first, but they both work:
import ConfigParser
from unittest import TestCase

from mock import create_autospec, patch, Mock

class FooBar(object):
    def method(self, filename):
        config = ConfigParser.ConfigParser()
        config.read(filename)

class TestFooBar(TestCase):
    def setUp(self):
        self.myfoobar = FooBar()

    @patch('ConfigParser.ConfigParser')
    def test_method(self, config_parser_class_mock):
        config_parser_mock = config_parser_class_mock.return_value
        self.myfoobar.method("configuration.ini")
        config_parser_class_mock.assert_called_once_with()
        config_parser_mock.read.assert_called_once_with("configuration.ini")

    def test_method_better(self):
        config_parser_mock = create_autospec(ConfigParser.ConfigParser, instance=True)
        config_parser_class_mock = Mock(return_value=config_parser_mock)
        with patch('ConfigParser.ConfigParser', config_parser_class_mock):
            self.myfoobar.method("configuration.ini")
        config_parser_class_mock.assert_called_once_with()
        config_parser_mock.read.assert_called_once_with("configuration.ini")
I've been using testbed, webtest, and nose to test my Python GAE app, and it is a great setup. I'm now implementing something similar to Nick's great example of using the deferred library, but I can't figure out a good way to test the parts of the code triggered by DeadlineExceededError.
Since this is in the context of a taskqueue, it would be painful to construct a test that took more than 10 minutes to run. Is there a way to temporarily set the taskqueue time limit to a few seconds for the purpose of testing? Or perhaps some other way to elegantly test the execution of code in the except DeadlineExceededError block?
Abstract the "GAE context" for your code: in production, provide the real GAE implementation; for testing, provide a mock that raises DeadlineExceededError. The test should not depend on any timeout and should be fast.
Sample abstraction (just glue):
class AbstractGAETaskContext(object):
    def task_expired(self):
        pass  # this will throw the exception in the mock implementation

    # here you define any method that you call into GAE, to be mocked
    def deferred(self, *args, **kwargs):
        pass
If you don't like the abstraction, you can monkey patch for testing only; you still need to define the task_expired function as your hook for testing.
task_expired should be called during your task implementation function, as the sketch below shows.
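For illustration, a mock context might count the deadline checks and raise after a limit. Everything below (the limit counter, process_rows, the per-row work) is illustrative, with DeadlineExceededError imported from google.appengine.runtime:

from google.appengine.runtime import DeadlineExceededError

class MockGAETaskContext(AbstractGAETaskContext):
    """Test double: pretends the deadline is hit after `limit` checks."""
    def __init__(self, limit):
        self.limit = limit
        self.checks = 0
        self.deferred_calls = []

    def task_expired(self):
        self.checks += 1
        if self.checks > self.limit:
            raise DeadlineExceededError()

    def deferred(self, *args, **kwargs):
        self.deferred_calls.append((args, kwargs))  # record for assertions

def process_rows(rows, gae_context):
    """Hypothetical task body: check the deadline before each unit of work."""
    for row in rows:
        gae_context.task_expired()  # raises in the mock once `limit` is exceeded
        # ... do the real per-row work here ...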
UPDATED: here is a third solution.
First I want to mention that Nick's sample implementation is not so great: the Mapper class has too many responsibilities (deferring, querying data, updating in batches), and this makes the tests hard to write, since a lot of mocks need to be defined. So I extracted the deferring responsibility into a separate class. You only want to test the deferring mechanism; what actually happens (the update, the query, etc.) should be handled by other tests.
Here is the deferring class; note it no longer depends on GAE:
class DeferredCall(object):
    def __init__(self, deferred):
        self.deferred = deferred

    def run(self, long_execution_call, context, *args, **kwargs):
        '''long_execution_call should return a tuple telling us whether the
        operation timed out, and the context where it was abandoned.'''
        next_context, timed_out = long_execution_call(context, *args, **kwargs)
        if timed_out:
            self.deferred(self.run, long_execution_call, next_context, *args, **kwargs)
Here is the test module:
import unittest

class Test(unittest.TestCase):
    def test_defer(self):
        calls = []
        def mock_deferrer(callback, *args, **kwargs):
            calls.append((callback, args, kwargs))
        def interrupted(context):
            return "new_context", True
        d = DeferredCall(mock_deferrer)
        d.run(interrupted, "init_context")
        self.assertEqual(1, len(calls), 'a deferred call should be made')

    def test_no_defer(self):
        calls = []
        def mock_deferrer(callback, *args, **kwargs):
            calls.append((callback, args, kwargs))
        def completed(context):
            return None, False
        d = DeferredCall(mock_deferrer)
        d.run(completed, "init_context")
        self.assertEqual(0, len(calls), 'no deferred call should be made')
How will look the Nick's Mapper implementation:
class Mapper:
    ...
    def _continue(self, start_key, batch_size):
        try:
            ...  # here is the same code, nothing was changed
        except DeadlineExceededError:
            # Write any unfinished updates to the datastore.
            self._batch_write()
            # Queue a new task to pick up where we left off.
            ## deferred.defer(self._continue, start_key, batch_size)
            return start_key, True  # make it compatible with DeferredCall
        self.finish()
        return None, False  # make it compatible with DeferredCall

    runner = _continue
Here is the code where you register the long-running task; this only depends on the GAE deferred lib:

from google.appengine.ext import deferred

from deferred_call import DeferredCall  # wherever you placed DeferredCall
from person_mapper import PersonMapper  # this inherits from Mapper

mapper = PersonMapper()
DeferredCall(deferred.defer).run(mapper.runner, None, 100)  # start_key=None, batch_size=100
Background
So let's say I'm making an app for GAE, and I want to use API Hooks.
BIG EDIT: In the original version of this question, I described my use case, but some folks correctly pointed out that it was not really suited for API Hooks. Granted! Consider me helped. But now my issue is academic: I still don't know how to use hooks in practice, and I'd like to. I've rewritten my question to make it much more generic.
Code
So I make a model like this:
class Model(db.Model):
    user = db.UserProperty(required=True)

    def pre_put(self):
        # Sets a value, raises an exception, whatever. Use your imagination
        pass
And then I create a db_hooks.py:
from google.appengine.api import apiproxy_stub_map

def patch_appengine():
    def hook(service, call, request, response):
        assert service == 'datastore_v3'
        if call == 'Put':
            for entity in request.entity_list():
                entity.pre_put()

    apiproxy_stub_map.apiproxy.GetPreCallHooks().Append('preput',
                                                        hook,
                                                        'datastore_v3')
Being TDD-addled, I'm making all this using GAEUnit, so in gaeunit.py, just above the main method, I add:
import db_hooks
db_hooks.patch_appengine()
And then I write a test that instantiates and puts a Model.
Question
While patch_appengine() is definitely being called, the hook never is. What am I missing? How do I make the pre_put function actually get called?
Hooks are a little low level for the task at hand. What you probably want is a custom property class. DerivedProperty, from aetycoon, is just the ticket.
Bear in mind, however, that the 'nickname' field of the user object is probably not what you want - per the docs, it's simply the user part of the email field if they're using a gmail account, otherwise it's their full email address. You probably want to let users set their own nicknames, instead.
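For reference, a DerivedProperty model might look like the sketch below (based on aetycoon's documented usage; the model and field names are placeholders):

from google.appengine.ext import db
from aetycoon import DerivedProperty

class Profile(db.Model):
    email = db.StringProperty(required=True)
    # recomputed automatically every time the entity is put()
    email_lower = DerivedProperty(lambda self: self.email.lower())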
The issue here is that within the context of the hook() function an entity is not an instance of db.Model as you are expecting.
In this context, entity is the protocol buffer class confusingly referred to as entity (entity_pb). Think of it like a JSON representation of your real entity: all the data is there, and you could build a new instance from it, but there is no reference to your memory-resident instance that is waiting for its callback.
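If you do need a model instance inside the hook, you can rebuild one from the protocol buffer. A sketch, with the caveat that db.model_from_protobuf gives you a fresh copy, so mutations will not flow back into the request:

from google.appengine.ext import db

def hook(service, call, request, response):
    if call == 'Put':
        for entity_pb in request.entity_list():
            model_instance = db.model_from_protobuf(entity_pb)
            model_instance.pre_put()  # runs on a rebuilt copy, not your original object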
Monkey patching all of the various put/delete methods is the best way to set up Model-level callbacks as far as I know†
Since there don't seem to be many resources on how to do this safely with the newer async calls, here's a BaseModel that implements before_put, after_put, before_delete & after_delete hooks:
class HookedModel(db.Model):
    def before_put(self):
        logging.error("before put")

    def after_put(self):
        logging.error("after put")

    def before_delete(self):
        logging.error("before delete")

    def after_delete(self):
        logging.error("after delete")

    def put(self):
        return self.put_async().get_result()

    def delete(self):
        return self.delete_async().get_result()

    def put_async(self):
        return db.put_async(self)

    def delete_async(self):
        return db.delete_async(self)
Inherit your model-classes from HookedModel and override the before_xxx,after_xxx methods as required.
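For example, a concrete model could look like this sketch (the field and the normalization in before_put are placeholders):

class Account(HookedModel):
    email = db.StringProperty(required=True)

    def before_put(self):
        self.email = self.email.lower()  # normalize before every save

account = Account(email='Foo@Example.com')
account.put()  # before_put runs first; after_put runs once the write completes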
Place the following code somewhere that will get loaded globally in your application (like main.py if you use a pretty standard looking layout). This is the part that calls our hooks:
def normalize_entities(entities):
    if not isinstance(entities, (list, tuple)):
        entities = (entities,)
    return [e for e in entities if hasattr(e, 'before_put')]

# monkeypatch put_async to call entity.before_put
db_put_async = db.put_async

def db_put_async_hooked(entities, **kwargs):
    ents = normalize_entities(entities)
    for entity in ents:
        entity.before_put()
    a = db_put_async(entities, **kwargs)
    get_result = a.get_result
    def get_result_with_callback():
        for entity in ents:
            entity.after_put()
        return get_result()
    a.get_result = get_result_with_callback
    return a

db.put_async = db_put_async_hooked

# monkeypatch delete_async to call entity.before_delete
db_delete_async = db.delete_async

def db_delete_async_hooked(entities, **kwargs):
    ents = normalize_entities(entities)
    for entity in ents:
        entity.before_delete()
    a = db_delete_async(entities, **kwargs)
    get_result = a.get_result
    def get_result_with_callback():
        for entity in ents:
            entity.after_delete()
        return get_result()
    a.get_result = get_result_with_callback
    return a

db.delete_async = db_delete_async_hooked
You can save or destroy your instances via model.put(), or any of the db.put(), db.put_async(), etc., methods, and get the desired effect.
†would love to know if there is an even better solution!?
I don't think that Hooks are really going to solve this problem. The Hooks will only run in the context of your AppEngine application, but the user can change their nickname outside of your application using Google Account settings. If they do that, it won't trigger any logic implemented in your hooks.
I think that the real solution to your problem is for your application to manage its own nickname that is independent of the one exposed by the Users entity.