Mocking a decorator that uses a hardcoded global variable - Python

When trying to unit test the code snippet below, I am limited by the rate limit that the decorator wrapping the calc_something function imposes. It seems that I can't override RAND_RATE in my unit tests, since by the time I import the module containing my implementation the decorator has already wrapped my function. How can I solve this issue?
RAND_RATE = 20
RAND_PERIOD = 10

@limits(calls=RAND_RATE, period=RAND_PERIOD)
def calc_something():
    ...

Without knowing exactly what limits does, we don't know what (if anything) can be patched. Instead, leave the base implementation undecorated for use by the unit tests, and define calc_something separately as the result of applying limits manually.
RAND_RATE = 20
RAND_PERIOD = 10

def _do_calc():
    ...

calc_something = limits(calls=RAND_RATE, period=RAND_PERIOD)(_do_calc)
Now in your tests, you can define any decorated version you like:
test_me = limits(10, 5)(my_module._do_calc)
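Expanding that into a couple of complete test methods, a minimal sketch might look like this (it assumes the production code above lives in my_module and that the limits decorator is importable from there; both are assumptions, not part of the question):

# a minimal sketch; my_module and the import of limits are assumptions
import unittest

import my_module

class CalcSomethingTest(unittest.TestCase):
    def test_logic_directly(self):
        # the undecorated base implementation: no rate limit applies
        result = my_module._do_calc()
        ...  # assertions on result go here

    def test_with_relaxed_limits(self):
        # re-wrap the base implementation with test-friendly limits
        test_me = my_module.limits(calls=10, period=5)(my_module._do_calc)
        test_me()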

Related

Python testing: mocking a function that's imported AND used inside another function

Due to circular-import issues which are common with Celery tasks in Django, I'm often importing Celery tasks inside of my methods, like so:
# some code omitted for brevity
# accounts/models.py
def refresh_library(self, queue_type="regular"):
    from core.tasks import refresh_user_library

    refresh_user_library.apply_async(
        kwargs={"user_id": self.user.id}, queue=queue_type
    )
    return 0
In my pytest test for refresh_library, I'd only like to test that refresh_user_library (the Celery task) is called with the correct args and kwargs. But this isn't working:
# tests/test_accounts_models.py
#mock.patch("accounts.models.UserProfile.refresh_library.refresh_user_library")
def test_refresh_library():
The error is about refresh_library not having an attribute refresh_user_library.
I suspect this is because the task (refresh_user_library) is imported inside the function itself, but I'm not too experienced with mocking, so this might be completely wrong.
Even though apply_async is your own-created function in your core.tasks, if you do not want to test it but only want to make sure you are passing the correct arguments, you need to mock it. In your question you're patching the wrong target. You should do:
# tests/test_accounts_models.py
#mock.patch("core.tasks.rehresh_user_library.apply_sync")
def test_refresh_library():
In your task function, refresh_user_library is a local name, not an attribute of the task. What you want is the real qualified name of the function you want to mock:
#mock.patch("core.tasks.refresh_user_library")
def test_refresh_library():
# you test here
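Putting both corrections together, a minimal sketch of the whole test might look like this (the UserProfile import path and the way a profile instance is obtained are assumptions, not part of the question):

# tests/test_accounts_models.py -- a minimal sketch; adjust the
# import path and profile setup to your project
from unittest import mock

from accounts.models import UserProfile

@mock.patch("core.tasks.refresh_user_library")
def test_refresh_library(mock_refresh_user_library):
    profile = UserProfile()  # in practice, build this with a user via a fixture/factory
    result = profile.refresh_library(queue_type="regular")
    # the local import inside refresh_library resolves the name from
    # core.tasks at call time, so the patch above is what gets called
    mock_refresh_user_library.apply_async.assert_called_once_with(
        kwargs={"user_id": profile.user.id}, queue="regular"
    )
    assert result == 0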

How to log the name of the test class, if the test method resides in a class common for all tests?

I have the following project structure:
/root
    /tests
        common_test_case.py
        test_case_1.py
        test_case_2.py
        ...
    project_file.py
    ...
Every test test_case_... inherits from both unittest.TestCase and common_test_case.CommonTestCase. The class CommonTestCase contains test methods that should be executed by all the tests (though using data unique to each test, stored in and accessed via self.something of the test). If some specific tests are needed for an exact test case, they are added directly to that particular class.
Currently I am working on adding logging to my tests. Among other things I would like to log the class the method was run from (since the approach above implies the same test method name for different classes). I would like to stick with the built-in logging module to achieve this.
I have tried the following LogRecord attributes: %(filename)s, %(module)s, %(pathname)s. However, for methods defined in common_test_case.py they all return the path/name of common_test_case.py, not the test module they were actually run from.
My questions are:
Is there a way to achieve what I am trying to, using only built-in logging module?
Using some third-party/other module (I was thinking maybe some "hacky" solution with inspect)?
Is it possible to achieve (in Python) at all?
Your question appears similar to this one, and is solved by:
self.id()
See the function definition here, which uses self.__class__ on the instantiated TestCase instance. Given that you are using multiple inheritance, Python's multiple inheritance rules apply:
For most purposes, in the simplest cases, you can think of the search for attributes inherited from a parent class as depth-first, left-to-right, not searching twice in the same class where there is an overlap in the hierarchy.
This means that common_test_case.CommonTestCase will be searched first, then unittest.TestCase. If there is no id function in common_test_case.CommonTestCase, things should work as if the class were derived only from unittest.TestCase. If you feel the need to add an id function to CommonTestCase, something like this should work (if really necessary):
def id(self):
    # delegate to unittest.TestCase.id() via the MRO
    if isinstance(self, unittest.TestCase):
        return super().id()
The solution I've found (that does the trick, so far):
import inspect
class_called_from = inspect.stack()[1][0].f_locals['self'].__class__.__name__
I'm still wondering, though, if there is a "clearer" method, or if this is possible to achieve using logging module.
Recipes, based on West's answer (tested on Python 3.6.1):
test_name = self.id().split('.')[-1]
class_called_from = self.id().split('.')[-2]
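For instance, a shared test method in CommonTestCase could feed those recipes into the built-in logging module like this (a sketch; the method name and logger setup are placeholders):

# a minimal sketch; CommonTestCase is the shared mixin from the question
import logging
import unittest

logger = logging.getLogger(__name__)

class CommonTestCase:
    def test_shared_behaviour(self):
        # self.id() looks like "test_case_1.TestCase1.test_shared_behaviour"
        class_called_from = self.id().split('.')[-2]
        test_name = self.id().split('.')[-1]
        logger.info("running %s.%s", class_called_from, test_name)

class TestCase1(CommonTestCase, unittest.TestCase):
    pass  # inherits and runs the shared test under its own class name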

Python: know function is called from a unittest?

Is there a way to know in Python if a function is called from the context of a unittest execution or a debugging run?
For context, I am trying to unittest code in which I use functions that perform a database call. In order to avoid database calls during the test of that function (DB calls are tested separately), I am trying to make the DB I/O functions aware of their environment, so that they mock themselves when called within a unittest and log additional variables during a debug run.
My current approach is to read/write environment variables, but it seems a little bit of an overkill, and I think Python must have a better mechanism for that.
Edit:
Here is the example of a function I am trying to unittest:
from Database_IO import Database_read
def some_function(significance_level, time_range):
    data = Database_read(time_range)
    significant_data = data > significance_level
    return significant_data
In my opinion, if you write your function to behave in a different way when tested, you are not really testing it.
To test the function, I'd mock.patch() the database object and then check that it has been used correctly in your function.
The most difficult thing when you start using the mock library is to find the correct object to replace.
In your example, if your_module imports the Database_read object from the Database_IO module, you can test it using code similar to the following:
with mock.patch('your_module.Database_read') as dbread_mock:
    # prepare the dbread_mock
    dbread_mock.return_value = 10
    # execute a test call
    retval = some_function(3, 'some range')
    # check the result
    dbread_mock.assert_called_with('some range')
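The same test can also be written in decorator form inside a TestCase; a sketch that reuses the hypothetical your_module layout from above:

import unittest
from unittest import mock

from your_module import some_function

class SomeFunctionTest(unittest.TestCase):
    @mock.patch('your_module.Database_read')
    def test_reads_database(self, dbread_mock):
        dbread_mock.return_value = 10
        retval = some_function(3, 'some range')
        dbread_mock.assert_called_with('some range')
        self.assertTrue(retval)  # 10 > 3 is truthy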

A DRY way of writing similar unit tests in python

I have some similar unit tests in Python. They are so similar that only one argument changes between them.
class TestFoo(TestCase):
    def test_typeA(self):
        self.assertTrue(foo(bar=TYPE_A))

    def test_typeB(self):
        self.assertTrue(foo(bar=TYPE_B))

    def test_typeC(self):
        self.assertTrue(foo(bar=TYPE_C))

    ...
Obviously this is not very DRY, and if you have even 4-5 different options the code is going to be very repetitive.
Now I could do something like this
class TestFoo(TestCase):
    BAR_TYPES = (
        TYPE_A,
        TYPE_B,
        TYPE_C,
        ...
    )

    def _foo_test(self, bar_type):
        self.assertTrue(foo(bar=bar_type))

    def test_foo_bar_type(self):
        for bar_type in self.BAR_TYPES:
            self._foo_test(bar_type=bar_type)
This works; however, when an exception is raised, how will I know whether _foo_test failed with argument TYPE_A, TYPE_B, or TYPE_C?
Perhaps there is a better way of structuring these very similar tests?
What you are trying to do is essentially a parameterized test. This feature isn't included in the standard Django or Python unittest modules, but a number of libraries provide it: nose-parameterized, py.test, ddt.
My favorite so far is ddt: it most resembles NUnit/JUnit-style parameterized tests, is pretty lightweight, doesn't get in your way, and does not require a dedicated test runner (as nose-parameterized does). The way it helps you is that it modifies the test name to include all parameters, so you can clearly see which test case failed just by looking at the test name.
With ddt your example would look like this:
import ddt

@ddt.ddt
class TestProcessCreateAgencyOfferAndDispatch(TestCase):

    @ddt.data(TYPE_A, TYPE_B, TYPE_C)
    def test_foo_bar_type(self, type):
        self.assertTrue(foo(bar=type))
In this case the names will look like test_foo_bar_type__TYPE_A (technically, it constructs something like [test_name]__[repr(parameter_1)]__[repr(parameter_2)]).
As a bonus, it is much cleaner (no helper method), and you get three methods instead of one. The advantage is that you can test various code paths in a method and get one test case per path (though a certain amount of thought is needed; sometimes it's better to have a dedicated test for some code paths).
Most TestCase assertion methods, including assertTrue, take an optional msg argument.
If you change your BAR_TYPES tuple to include the variable names, then you can include this in the message that is shown when the assertion fails.
class TestProcessCreateAgencyOfferAndDispatch(TestCase):
    BAR_TYPES = (
        ('TYPE_A', TYPE_A),
        ('TYPE_B', TYPE_B),
        ('TYPE_C', TYPE_C),
        ...
    )

    def _foo_test(self, var_name, bar_type):
        self.assertTrue(foo(bar=bar_type), var_name)

    def test_foo_bar_type(self):
        for (var_name, bar_type) in self.BAR_TYPES:
            self._foo_test(var_name=var_name, bar_type=bar_type)
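Since Python 3.4 the standard library also solves this without helper methods: unittest's subTest() context manager reports each failing parameter separately. A sketch using the names from the question:

# a sketch using unittest's subTest (Python 3.4+); TestFoo, foo and
# the TYPE_* constants come from the question
class TestFoo(TestCase):
    BAR_TYPES = (TYPE_A, TYPE_B, TYPE_C)

    def test_foo_bar_type(self):
        for bar_type in self.BAR_TYPES:
            # each failure is reported with its bar_type value, and
            # the loop keeps going past failing parameters
            with self.subTest(bar_type=bar_type):
                self.assertTrue(foo(bar=bar_type))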

How to mock a function call used by an imported PyPI library in Python

I have the following code that I'm trying to test:
great_report.py
from retry import retry

@retry((ReportNotReadyException), tries=3, delay=10, backoff=3)
def get_link(self):
    report_link = _get_report_link_from_3rd_party(params)
    if report_link:
        return report_link
    else:
        stats.count("report_not_ready", 1)
        raise ReportNotReadyException
I've got a testing function which mocks _get_report_link_from_3rd_party and tests everything, but I don't want the function to actually pause execution when I run tests.
@mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
def test_get_link_raises_exception(self, mock_get_report_link):
    self.assertRaises(ReportNotReadyException, get_link)
I tried mocking the retry parameters, but I ran into issues where get_link keeps retrying over and over, which causes long build times instead of just raising the exception and continuing. How can I mock the parameters of the @retry call in my test?
As hinted here, an easy way to prevent the actual sleeping is by patching the time.sleep function. Here is the code that did that for me:
@patch('time.sleep', side_effect=lambda _: None)
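Applied to the question's test, the two patches can be stacked like this (a sketch; note that the mock arguments are passed bottom-up, matching the decorator order):

# a sketch: time.sleep is patched so retries still run but finish
# instantly; the repo.great_report paths come from the question
@mock.patch('time.sleep', side_effect=lambda _: None)
@mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
def test_get_link_raises_exception(self, mock_get_report_link, mock_sleep):
    self.assertRaises(ReportNotReadyException, get_link)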
There is no way to change a decorator's parameters after the module has been loaded. Decorators wrap the original function and replace it at module load time.
First, I would like to encourage you to change your design a little to make it more testable.
If you extract the body of the get_link() method into a new method, test that new method, and trust the retry decorator, you will achieve your goal.
If you don't want to add a new method to your class, you can use a config module that stores the variables you pass to the retry decorator. You can then use two different modules for testing and production, as sketched below.
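A minimal sketch of that config-module idea (every name here is hypothetical, not taken from the question's code):

# config.py -- hypothetical settings module; a test build could ship
# a variant with RETRY_TRIES = 1 and RETRY_DELAY = 0
RETRY_TRIES = 3
RETRY_DELAY = 10
RETRY_BACKOFF = 3

# great_report.py
from retry import retry
import config

@retry(ReportNotReadyException, tries=config.RETRY_TRIES,
       delay=config.RETRY_DELAY, backoff=config.RETRY_BACKOFF)
def get_link(self):
    ...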
The last way is the hacking way, where you replace retry.api.__retry_internal with your own version that invokes the original one with changed parameters:
import unittest
from unittest.mock import *
from pd import get_link, ReportNotReadyException
import retry

orig_retry_internal = retry.api.__retry_internal

def _force_retry_params(new_tries=-1, new_delay=0, new_max_delay=None, new_backoff=1, new_jitter=0):
    def my_retry_internals(f, exceptions, tries, delay, max_delay, backoff, jitter, logger):
        # call the original __retry_internal with the forced parameters
        return orig_retry_internal(f, exceptions, tries=new_tries, delay=new_delay, max_delay=new_max_delay,
                                   backoff=new_backoff, jitter=new_jitter, logger=logger)
    return my_retry_internals

class MyTestCase(unittest.TestCase):
    @patch("retry.api.__retry_internal", side_effect=_force_retry_params(new_tries=1))
    def test_something(self, m_retry):
        self.assertRaises(ReportNotReadyException, get_link, None)
IMHO you should use that hacking solution only if your back is against the wall and you have no chance to redesign your code to make it more testable. Internal functions/classes/methods can change without notice, and your test can become difficult to maintain in the future.
