I have encountered a problem while writing a unit test. This is a chunk from a unit test file:
obj = MainObj.objects.create(short_url="a1b2c3")
with unittest.mock.patch('prj.apps.app.models.base.generate_url_string',
                         return_value="a1b2c3") as mocked_generate_url_string:
    obj.generate_short_url()
This is a chunk of code from the module 'prj.apps.app.models.base' (the file that imports the mocked function 'generate_url_string'):
from ..utils import generate_url_string

...

def generate_short_url(self):
    short_url = generate_url_string()
    while MainObj.objects.filter(short_url=short_url).count():
        short_url = generate_url_string()
    return short_url
I want the unit test to show that 'generate_short_url' does not return a value that some object in the system already uses as its short_url. For this purpose I mocked 'generate_url_string' with a predefined return value.
The problem is that I could not limit the number of calls to the mocked function, so the code runs into an infinite loop.
I would like the mock to return the predefined result ('a1b2c3') only once, and to let the function work as usual after that. Something like this:
with unittest.mock.patch('prj.apps.app.models.base.generate_url_string',
                         return_value="a1b2c3",
                         times_to_call=1) as mocked_generate_url_string:
    obj.generate_short_url()
But I don't see any attribute like 'times_to_call' in the mock library.
Is there any way to handle that?
Define a generator that first yields the fixed value and then yields the return value of the real function (the real function is passed in as an argument, to avoid going through the patched name).
def mocked(real_fn):
    yield "a1b2c3"
    while True:
        yield real_fn()
Then, use the generator as the side effect of the patched function.
with unittest.mock.patch(
        'prj.apps.app.models.base.generate_url_string',
        side_effect=mocked(prj.apps.app.models.base.generate_url_string)
) as mocked_generate_url_string:
    obj.generate_short_url()
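Here is a self-contained sketch of the pattern. `generate_url_string` below is a local stand-in for the real helper, and the patch target is the current module rather than `prj.apps.app.models.base`:

```python
import random
import string
import unittest.mock

def generate_url_string(length=6):
    # local stand-in for the real helper: a random alphanumeric string
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=length))

def mocked(real_fn):
    # first call yields the colliding value, later calls defer to the real helper
    yield "a1b2c3"
    while True:
        yield real_fn()

# the generator captures the real function before the patch is applied
with unittest.mock.patch(__name__ + ".generate_url_string",
                         side_effect=mocked(generate_url_string)) as mock_fn:
    first = generate_url_string()   # the forced collision: "a1b2c3"
    second = generate_url_string()  # a fresh value from the real helper
```

Note that `mocked(generate_url_string)` is evaluated before `patch` swaps the name out, so the generator holds a reference to the real implementation.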
I have a simple module (no classes, just utility functions) where a function foo() calls a number of functions from the same module, like this:
def get_config(args):
    ...
    return config_dictionary

def get_objects(args):
    ...
    return list_of_objects

def foo(no_run=False):
    config = get_config(...)
    if no_run:
        return XYZ
    objs = get_objects(config)
    for obj in objs:
        obj.work()
    ...  # a number of other functions from the same module are called
Is it possible to use Python Mockito to verify that get_config() was the last function called from my module in foo() (for certain arguments)?
Currently this is verified in this way:
spy2(mymodule.get_config)
spy2(mymodule.get_objects)
assert foo(no_run=True) == XYZ
verify(mymodule).get_config(...)
# Assumes that get_objects() is the first function to be called
# in foo() after the configuration is retrieved.
verify(mymodule, times=0).get_objects(...)
Perhaps something like generating the spy() and verify() calls dynamically? Or rewriting the module into a class and stubbing the whole class?
Basically, I do not like the assumption in the test: the code in foo() could be reordered and the test would still pass.
That's not your real code, and such simplified code often fails to describe the real problem you have. If, for example, you don't expect a function to be called at all, like get_objects in your case, then why begin with spy2 in the first place? expect(<module>, times=0).<fn>(...) reads better in that case, and a subsequent verify is not needed.
There are also verifyNoMoreInteractions(<module>) and inorder.verify for ordering checks. But all this is guessing, as you don't tell us how XYZ is computed. (Basically: why spy2(get_config) and not a when call here? That is, why call the original implementation instead of mocking the answer?)
Issue: I have 2 functions that both require the same nested functions to operate, so those nested functions are currently copy-pasted into each one. The functions cannot be combined, because the second function relies on calling the first function twice, and unnesting the helpers would require too many parameters.
Question: Is it better to run the nested functions in the first function and append their values to an object to be fed into the second function, or is it better to copy-paste the nested functions?
Example:
def func_A(thing):
    def sub_func_A(thing):
        thing += 1
        return thing
    return sub_func_A(thing)

def func_B(thing):
    def sub_func_B(thing):
        thing += 1
        return thing
    val_A, val_B = func_A(5), func_A(5)
    return sub_func_B(val_A), sub_func_B(val_B)
Imagine these functions couldn't be combined, and that the nested function relied on so many parameters that moving it outside and calling it would be too cluttered.
The "better option" depends on a few factors:
the type of optimization you want to achieve, and
the time the functions take to execute.
If the optimization target is the time taken to execute the second function, then it depends on how long the nested function takes to run. If that time is less than the time needed to store its output when the first function calls it, then it is better to copy-paste the nested functions.
If, on the other hand, the nested function takes longer to execute than storing its output does, then it is better to execute it the first time and store its output for future use.
Further, as mentioned by @DarylG in the comments, a class-based approach can also be used, in which the nested function (subfunction) becomes a private method (accessible only from inside the class), while the two functions (func_A and func_B) remain public and usable from the outside. In code it might look something like this:
class MyClass:
    def __init__(self, ...):
        ...

    def __subfunc(self, thing):
        # PRIVATE SUBFUNC
        thing += 1
        return thing

    def func_A(self, thing):
        # PUBLIC FUNC A
        return self.__subfunc(thing)

    def func_B(self, thing):
        # PUBLIC FUNC B
        val_A, val_B = self.func_A(5), self.func_A(5)
        return self.__subfunc(val_A), self.__subfunc(val_B)
I would like a way to limit the calling of a function to once per distinct set of parameter values.
For example:
def unique_func(x):
    return x

>>> unique_func([1])
[1]
>>> unique_func([1])
*** won't return anything ***
>>> unique_func([2])
[2]
Any suggestions? I've looked into memoization but haven't worked out a solution just yet.
This is not solved by the suggested "Prevent a function from being called twice in a row", since that only covers the case where the immediately preceding call had the same parameters.
Memoization uses a mapping of arguments to return values. Here, you just want a mapping of arguments to None, which can be handled with a simple set.
def idempotize(f):
    cache = set()
    def _(x):
        if x in cache:
            return
        cache.add(x)
        return f(x)
    return _

@idempotize
def unique_fun(x):
    ...
With some care, this can be generalized to handle functions with multiple arguments, as long as they are hashable.
def idempotize(f):
    cache = set()
    def _(*args, **kwargs):
        k = (args, frozenset(kwargs.items()))
        if k in cache:
            return
        cache.add(k)  # remember the argument combinations we have seen
        return f(*args, **kwargs)
    return _
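A quick self-contained demonstration of the generalized decorator (the function names below are illustrative):

```python
def idempotize(f):
    # run f at most once per distinct argument combination; repeats return None
    cache = set()
    def wrapper(*args, **kwargs):
        key = (args, frozenset(kwargs.items()))
        if key in cache:
            return None
        cache.add(key)
        return f(*args, **kwargs)
    return wrapper

@idempotize
def add(a, b=0):
    return a + b

results = [add(1, b=2), add(1, b=2), add(3)]
# -> [3, None, 3]: the repeated call with the same arguments returns None
```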
Consider using the built-in functools.lru_cache() instead of rolling your own.
It won't return nothing on the second call with the same arguments (it will return the same thing as the first call), but maybe you can live with that. It seems a negligible price to pay compared to the advantage of using something maintained as part of the standard library.
It requires your argument x to be hashable, so it won't work with lists. Strings are fine.
from functools import lru_cache

@lru_cache()
def unique_fun(x):
    ...
I've built a function decorator to handle this scenario; it limits repeated calls to the same function within a given timeframe.
You can install it from PyPI with pip install ofunctions.threading, or check out the GitHub sources.
Example: I want to limit calls to the same function with the same parameters to one call per 10 seconds:
from ofunctions.threading import no_flood

@no_flood(10)
def my_function():
    print("It's me, the function")

for _ in range(0, 5):
    my_function()
# Will print the text only once.
If the function is called again after the 10 seconds have passed, we'll allow a new execution, but will prevent any other execution for the next 10 seconds.
By default @no_flood limits calls per distinct set of parameters, so calling func(1) and func(2) is still allowed concurrently.
The @no_flood decorator can also limit all calls to a given function regardless of its parameters:
from ofunctions.threading import no_flood

@no_flood(10, False)
def my_function(var):
    print("It's me, function number {}".format(var))

for i in range(0, 5):
    my_function(i)
# Will only print the function text once
I'm patching in my test (Python 2.7):
args[1].return_value.getMarkToMarketReportWithSummary.return_value = ([], {})
and when debugging I can see the expected mocked method with the correct return value, and calling it directly works fine (screenshots omitted).
But, the method has multiple arguments:
rows, summary = manager.getMarkToMarketReportWithSummary(
    portfolios, report_data_map, account,
    ...
    include_twrr=self.__include_twrr)
and when the test runner calls the method, it fails and returns a plain MagicMock instead of the value mocked above. It's because of the arguments, making the method call a string or something. In the debugger the method name looks the same, but it has the \n with the args, etc. (screenshot omitted). What is this? Is it an onion? Because it is making me cry.
Evaluating it again after that gives one more attribute, this time with #LINE#, because, you know, rubbing salt in my eyes is its goal.
:_(
I'm using Mock (http://www.voidspace.org.uk/python/mock/mock.html), and came across a particular mocking case that I can't figure out.
I have a function that makes multiple calls to some_function, which is being mocked.
def function():
    some_function(1)
    some_function(2)
    some_function(3)
I only want to mock the first and third calls to some_function. The second call should go to the real some_function.
I tried some alternatives with http://www.voidspace.org.uk/python/mock/mock.html#mock.Mock.mock_calls, but with no success.
Thanks in advance for the help.
It seems that the wraps argument could be what you want:
wraps: Item for the mock object to wrap. If wraps is not None then calling the
Mock will pass the call through to the wrapped object (returning the
real result and ignoring return_value).
However, since you only want the second call to not be mocked, I would suggest the use of mock.side_effect.
If side_effect is an iterable then each call to the mock will return
the next value from the iterable.
If you want to return a different value for each call, it's a perfect fit:
somefunction_mock.side_effect = [10, None, 10]
Only the first and third calls to somefunction will return 10.
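As a self-contained check of the iterable form (`some_function` here is a stand-in defined and patched in the current module):

```python
import unittest.mock

def some_function(arg):
    # stand-in for the real function
    return arg

# each call to the mock consumes the next value from the iterable
with unittest.mock.patch(__name__ + ".some_function",
                         side_effect=[10, None, 10]):
    results = [some_function(1), some_function(2), some_function(3)]
# results == [10, None, 10]
```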
However, if you need the second call to go through to the real function, you can also pass side_effect a callable, but I find it pretty ugly (there might be a smarter way to do it):

from unittest.mock import DEFAULT

class CustomMock(object):
    calls = 0
    def some_function(self, arg):
        self.calls += 1
        if self.calls == 2:
            # second call only: use the real implementation
            return my_real_function(arg)
        # returning DEFAULT makes the mock fall back to its normal return_value
        return DEFAULT

somefunction_mock.side_effect = CustomMock().some_function
Even simpler than creating a CustomMock class:

from unittest.mock import DEFAULT

def side_effect(*args, **kwargs):
    side_effect.counter += 1
    if side_effect.counter == 2:
        # second call only: run the real function
        return my_real_function(*args, **kwargs)
    # any other call: fall back to the mock's return_value
    return DEFAULT

side_effect.counter = 0
mocked_method.side_effect = side_effect
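Put together, the counter-based side_effect runs like this (a sketch with stand-in names; a reference to the real function is saved before patching so the second call can still reach it):

```python
import unittest.mock
from unittest.mock import DEFAULT

real_calls = []

def some_function(arg):
    real_calls.append(arg)  # record that the real implementation ran
    return arg * 10

real_some_function = some_function  # keep a handle on the real implementation

def side_effect(*args, **kwargs):
    side_effect.counter += 1
    if side_effect.counter == 2:
        # second call only: run the real function
        return real_some_function(*args, **kwargs)
    # all other calls: fall back to the mock's return_value
    return DEFAULT

side_effect.counter = 0

with unittest.mock.patch(__name__ + ".some_function",
                         side_effect=side_effect, return_value="mocked"):
    results = [some_function(1), some_function(2), some_function(3)]
# results == ["mocked", 20, "mocked"]; only the second call hit the real code
```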
I faced the same situation today. After some hesitation I found a different way to work around it.
At first, I had a function that looks like this:
def reboot_and_balabala(args):
    os.system('do some prepare here')
    os.system('reboot')
    sys.exit(0)
I want the first call to os.system to be invoked; otherwise the local file is not generated and I cannot verify it.
But I really do not want the second call to os.system to be invoked, lol.
At first, I had a unit test similar to:
def test_reboot_and_balabala(self):
    with patch.object(os, 'system') as mock_system:
        # do some mock setup on mock_system; this is what I was looking for,
        # but I did not find any easy and clear solution
        with patch.object(sys, 'exit') as mock_exit:
            my_lib.reboot_and_balabala(...)
            # assert mock invocation args
            # check generated files
But finally I realized that by adjusting the code I got a better code structure and better unit tests:
def reboot():
    os.system('reboot')
    sys.exit(0)

def reboot_and_balabala(args):
    os.system('do some prepare here')
    reboot()
And then we can test this code by:
def test_reboot(self):
    with patch.object(os, 'system') as mock_system:
        with patch.object(sys, 'exit') as mock_exit:
            my_lib.reboot()
            mock_system.assert_called_once_with('reboot')
            mock_exit.assert_called_once_with(0)

def test_reboot_and_balabala(self):
    with patch.object(my_lib, 'reboot') as mock_reboot:
        my_lib.reboot_and_balabala(...)
        # check generated files here
        mock_reboot.assert_called_once()
This is not a direct answer to the question, but I think it is very inspiring.