Imagine a class like so:
class Foo:
    def method_1(self):
        bar = Bar()
        bazz = Bazz(bar)
        return bazz.method_2()
For unit testing, how can we mock the Bar object when we never call any methods on it, we're just passing it as a parameter to the Bazz constructor? (Yes, this is not ideal, but this pattern can be found in a top-level class wiring together different objects via dependency injection).
You do call the Bar object when you execute bar = Bar(), so you can easily mock it by patching the name Bar in the module under test:
mock_bar = mocker.MagicMock(name='Bar')
mocker.patch('foo.Bar', new=mock_bar)
# mock_bar.return_value is the new mocked instance
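A fuller sketch of what the test can then assert, collapsed into one file here so it runs standalone (the class bodies for Bar and Bazz are placeholders of my own; in the real layout you would patch 'foo.Bar' and 'foo.Bazz'). Once both names are patched, you can verify the wiring: the Bar instance was handed to the Bazz constructor, even though no method is ever called on it.

```python
from unittest import mock

# Stand-ins for the question's foo.py (assumption: Foo, Bar, Bazz live together there)
class Bar:
    pass

class Bazz:
    def __init__(self, bar):
        self.bar = bar

    def method_2(self):
        return 42

class Foo:
    def method_1(self):
        bar = Bar()
        bazz = Bazz(bar)
        return bazz.method_2()

# In the real layout this would be mock.patch('foo.Bar') / mock.patch('foo.Bazz').
with mock.patch(f'{__name__}.Bar') as mock_bar, \
     mock.patch(f'{__name__}.Bazz') as mock_bazz:
    result = Foo().method_1()

# Bar was instantiated, and that instance was passed to the Bazz constructor:
mock_bazz.assert_called_once_with(mock_bar.return_value)
# method_1 returned whatever bazz.method_2() returned:
assert result is mock_bazz.return_value.method_2.return_value
```

The key point is that you never need to call anything on mock_bar: asserting on the Bazz constructor's arguments is enough to test the wiring.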
I have a SomeModel class defined in a third party library which is used like this:
class SystemUnderTest:
    def foo(self):
        ...
        with SomeModel.my_method() as model:
            x = ...
            model.bar(x)
I would like to test that calling foo() results in bar() being called on that third-party model class. I'd like to know if I can do this using @patch.object():
class MyTestCase(unittest.TestCase):
    @patch.object(SomeModel, 'my_method')
    def test_my_method_is_called(self, my_mocked_model):
        sut = SystemUnderTest()
        sut.foo()
        # how do I access the return value of my_mocked_model
        # and confirm model.bar() has been called with x as an argument?
my_mocked_model is a mock that is automatically passed in to my test method by virtue of the patch.object decorator. Is there a way of asserting what calls are made to that mock object? How do I do this?
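A sketch of one way to do this, with stand-in definitions of SomeModel and SystemUnderTest of my own so the example runs standalone: because SomeModel.my_method() is used as a context manager, the model the code under test sees is the __enter__ value of the mock's return value, and you can assert calls on that object.

```python
from unittest import TestCase
from unittest.mock import patch

# Stand-in for the third-party class (assumption: my_method returns a context manager)
class SomeModel:
    @classmethod
    def my_method(cls):
        raise RuntimeError("real implementation; should be mocked in tests")

class SystemUnderTest:
    def foo(self):
        with SomeModel.my_method() as model:
            x = 42  # placeholder for the real computation of x
            model.bar(x)

class MyTestCase(TestCase):
    @patch.object(SomeModel, 'my_method')
    def test_my_method_is_called(self, my_mocked_model):
        SystemUnderTest().foo()
        # `with SomeModel.my_method() as model` means:
        #     model = my_mocked_model.return_value.__enter__.return_value
        model = my_mocked_model.return_value.__enter__.return_value
        model.bar.assert_called_once_with(42)
```

MagicMock supports the context-manager protocol out of the box, so no extra configuration of __enter__/__exit__ is needed.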
I am trying to mock two methods within a single function call. Below I have the method entry_point, which calls foo, which in turn calls bar.
I want to ensure that both foo and bar are called, and I also want to inspect their arguments to see what the types and values are of the passed arguments.
foo.py
class MyClass:
    def entry_point(self):
        self.foo(x=2)

    def foo(self, x):
        self.bar(y=x * 3)

    def bar(self, y):
        return y ** 2
test_foo.py
import unittest
from unittest.mock import patch

from foo import MyClass

class TestMyClass(unittest.TestCase):
    def test_nested_patch(self):
        mc = MyClass()
        # I would like to find out what arguments are passed
        # in to both foo() and bar()
        with patch.object(mc, "foo") as mock1:
            with patch.object(mc, "bar") as mock2:
                mc.entry_point()
                # PASSES
                mock1.assert_called_once()
                # FAILS HERE: "bar" not called
                mock2.assert_called_once()
                # ADDITIONAL: I want to see the function arguments
                _, kwargs1 = mock1.call_args
                _, kwargs2 = mock2.call_args
                self.assertEqual(kwargs1["x"], 2)
                self.assertEqual(kwargs2["y"], 6)
AssertionError: Expected 'bar' to have been called once. Called 0 times.
I have tried it a few different ways, but the above code seemed to be the cleanest way to explain my situation.
I recognize I could get around this by calling mc.entry_point() within two different (non-nested) patch context managers (one for foo, one for bar) but that is not as clean and doesn't really give me full control over the function calls.
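One way to make both assertions pass (my suggestion, not part of the original post) is to patch with wraps=: each mock then records its call and delegates to the real method, so entry_point really reaches bar through the mocked foo.

```python
from unittest.mock import patch

class MyClass:
    def entry_point(self):
        self.foo(x=2)

    def foo(self, x):
        self.bar(y=x * 3)

    def bar(self, y):
        return y ** 2

mc = MyClass()
# wraps= keeps the real behaviour: the mock records the call,
# then forwards it to the original bound method it wraps.
with patch.object(mc, "foo", wraps=mc.foo) as mock1:
    with patch.object(mc, "bar", wraps=mc.bar) as mock2:
        mc.entry_point()

mock1.assert_called_once()
mock2.assert_called_once()
assert mock1.call_args.kwargs == {"x": 2}
assert mock2.call_args.kwargs == {"y": 6}
```

Note that wraps=mc.foo captures the original bound method before the patch takes effect, which is why the real call chain survives. The plain patch in the question replaces foo with an inert mock, so self.bar is never reached, which is exactly the failure observed.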
I need a way to defer the initialization of a global variable until the first access to it. The overall idea is expressed in the following Python pseudocode:
FOO = bar
FOO.some_method_on_bar()  # init bar: bar = Bar(); bar.some_method_on_bar()
FOO.some_method_on_bar()  # use cached bar: bar.some_method_on_bar()
So far I'm thinking of somehow telling Python to call a special class method every time its instance is evaluated, but I can't seem to google it up:
class LazyGetter:
    def __init__(self, get_value) -> None:
        self.get_value = get_value

    def __class__instance__access__(self):
        return self.get_value()

FOO = LazyGetter(get_value=lambda: Bar())

FOO  # = LazyGetter.__class__instance__access__()
FOO.some_method_on_bar()  # = LazyGetter.__class__instance__access__().some_method_on_bar()
So, basically, I need to know if there's something equivalent to the made-up __class__instance__access__ method.
If you have to defer initialization, you may be doing too much in the __init__ method. But if you don't control that code, then you seem to be needing something like a proxy class, so you can do:
proxied_bar = Proxy(Bar)
...
proxied_bar.some_bar_method()  # initializes Bar if it isn't yet initialized, then calls some_bar_method
For one way to do this, see: Python proxy class
In that answer an instantiated object is proxied (rather than the class), so you have to make some modifications if you want to defer the __init__ call.
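A minimal sketch of such a proxy with the deferred __init__, assuming plain attribute forwarding is enough (class and method names are illustrative): the wrapped class is instantiated on first attribute access and cached afterwards.

```python
class Proxy:
    def __init__(self, cls, *args, **kwargs):
        # Store the class and constructor arguments; defer the real __init__.
        self._cls = cls
        self._args = args
        self._kwargs = kwargs
        self._instance = None

    def __getattr__(self, name):
        # Called only for attributes not found on the Proxy itself.
        if self._instance is None:
            self._instance = self._cls(*self._args, **self._kwargs)
        return getattr(self._instance, name)

class Bar:
    initialized = 0

    def __init__(self):
        Bar.initialized += 1

    def some_bar_method(self):
        return "hello from Bar"

proxied_bar = Proxy(Bar)
assert Bar.initialized == 0  # Bar.__init__ has not run yet
assert proxied_bar.some_bar_method() == "hello from Bar"
assert Bar.initialized == 1  # initialized on first access
proxied_bar.some_bar_method()
assert Bar.initialized == 1  # cached, not re-initialized
```

Because __getattr__ is only invoked when normal lookup fails, the Proxy's own _cls/_instance attributes resolve normally and there is no recursion.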
Since Python 3.7, one can define a module __getattr__ method to programmatically provide "global" attributes. Earlier and more generally, one can define a custom module type to provide such a method.
Assuming that Bar() is needed to initialise the global FOO, the following __getattr__ at module scope can be used.
# can type-annotate to "hint" that FOO will exist at some point
FOO: Bar

# called if module.<item> fails
def __getattr__(item: str):
    if item == "FOO":
        global FOO  # add FOO to global scope
        FOO = Bar()
        return FOO
    raise AttributeError(f"module {__name__!r} has no attribute {item!r}")
This makes FOO available programmatically when accessed as an attribute, i.e. as module.FOO or an import. It is only available in the global scope after the first such access.
If the access to FOO is expected to happen inside the module first, it is easier to provide a "getter" function instead.
def get_FOO() -> Bar:
    global _FOO
    try:
        return _FOO
    except NameError:
        _FOO = Bar()
        return _FOO
You might want to consider just having an actual global variable and accessing it with global <variable> but I can't say if that fits the use-case. It should work fine if you're just looking for some caching logic.
You might be able to do this with metaclasses which is a way of modifying a class when it's instantiated. Whether this is useful depends on what you're trying to achieve.
If you control the class code, you can use __getattribute__ to delay initialization until the first time you access an attribute.
class Bar:
    def __init__(self, *args):
        self._args = args

    def __getattribute__(self, name):
        args = super().__getattribute__('_args')
        if args is not None:
            # Initialize the object here, on first attribute access.
            self.data = args[0]
            self._args = None
        return super().__getattribute__(name)

    def some_method_on_bar(self):
        return self.data
Given a class and an instance like this:
class Foo:
    def bar(self):
        ...

foo = Foo()
is Foo.bar(foo) always equivalent to foo.bar(), or are there corner cases where the former call can lead to unexpected results?
Not necessarily. The descriptor protocol allows you to define what it means for an object to be accessed as an attribute.
Foo.bar(foo) is equivalent to Foo.__dict__['bar'].__get__(None, Foo)(foo)
foo.bar() is equivalent to type(foo).__dict__['bar'].__get__(foo, type(foo))()
There are two points of divergence. First, type(foo) may or may not be Foo, and second, depending on how __get__ is defined, the results when __get__ receives foo vs. None as its first argument may differ.
The corner case comes into play when Foo.bar() is decorated, e.g. in one of the two following ways:
class Foo:
    @classmethod
    def bar(cls, inst):
        ...

    @staticmethod
    def bar(inst):
        ...
In the former case, calling Foo.bar() automatically passes the class Foo as an argument - not an instance of the class, but the class itself. If you were to make a subclass Bar, then Bar.bar() would pass the class Bar as an argument. This is useful for polymorphism.
In the latter case, there are no implicit parameters being passed. You still have to call the method through its class (Foo.bar()) but neither the instance on which it is called (if you do that for some reason) nor the class on which it is called are passed as parameters.
In general, though, Foo.bar(foo) is equivalent to foo.bar() - the default behavior of methods in a class is to accept the instance on which the method is called as the first parameter of that method.
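A small demonstration of the classmethod divergence described above (class names here are illustrative): the class that the call goes through, not the defining class, is what gets passed.

```python
class Foo:
    @classmethod
    def bar(cls):
        return cls.__name__

class Sub(Foo):
    pass

assert Foo.bar() == "Foo"
assert Sub.bar() == "Sub"    # the subclass is passed: useful for polymorphism
assert Sub().bar() == "Sub"  # the instance's class, not the instance, is passed
```

With a plain (undecorated) method, by contrast, Sub().bar() would receive the instance itself as the first argument.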
When you do foo.bar, Python first tries to look up bar on foo itself. As bar is defined on the class Foo and not on the instance foo, that lookup fails. Only then does it move on and attempt to look up bar on foo.__class__ (i.e. Foo), where it succeeds. We can exploit this:
class Foo:
    def bar(self):
        ...

foo = Foo()
foo.bar = lambda: print('hello')
Now, foo.bar() and Foo.bar(foo) result in distinct actions, as different functions are resolved in the two cases.
The question is in the context of unit testing. I have created an instance of the class I am testing and am trying to test one of its methods. The method uses data it gets from a different class defined in a separate module, and I am going to mock that module.
How can I access my instance's namespace before running the method under test, so that I can mock the module containing the class my method gets its data from?
I am going to create an example here which I think parallels what you are trying to do.
Say you have some class that we'll call Data that is defined in the module foo. The foo module imports bar and a method of foo.Data calls bar.get_data() to populate itself.
You want to create a module test that will create an instance of foo.Data, but instead of using the actual module bar, you want that instance to use a mocked version of it.
You can set this up by importing foo from your test module, and then rebinding foo.bar to your mocked version of the module.
Here is an example of how this might look:
bar.py:
def get_data():
    return 'bar'
foo.py:
import bar

class Data(object):
    def __init__(self):
        self.val = bar.get_data()

if __name__ == '__main__':
    d = Data()
    print(d.val)  # prints 'bar'
test.py:
import foo

class bar_mock(object):
    @staticmethod
    def get_data():
        return 'test'

if __name__ == '__main__':
    foo.bar = bar_mock
    d = foo.Data()
    print(d.val)  # prints 'test'
Although this will get you by for a simple test case, you are probably better off looking into a mocking library to handle this for you.
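For instance, with unittest.mock you can patch the bar name inside foo instead of assigning it by hand. A sketch, collapsed into one file here so it runs standalone (in the real layout the patch target would be the string 'foo.bar'):

```python
from unittest import mock

# Inline stand-ins for bar.py and foo.py above, so the sketch is self-contained.
class bar:  # plays the role of the imported bar module
    @staticmethod
    def get_data():
        return 'bar'

class Data(object):
    def __init__(self):
        self.val = bar.get_data()

# In the real layout this would be mock.patch('foo.bar') or
# mock.patch('foo.bar.get_data', return_value='test').
with mock.patch.object(bar, 'get_data', return_value='test') as mock_get_data:
    d = Data()

assert d.val == 'test'
mock_get_data.assert_called_once_with()
```

Unlike the manual rebinding, the patch is automatically undone when the with block exits, so other tests still see the real module.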