How to disable a try/except block during testing? - python

I wrote a cronjob that iterates through a list of accounts and performs some web call for them (shown below):
for account in self.ActiveAccountFactory():
    try:
        self.logger.debug('Updating %s', account.login)
        self.update_account_from_fb(account)
        self.check_balances()
        self.check_rois()
    except Exception, e:
        self.logger.exception(traceback.format_exc())
Because this job is run by Heroku once every 10 minutes, I do not want the entire job to fail just because one account is running into issues (it happens). I placed a try/except clause here so that the task is "fault-tolerant".
However, I noticed that when I am testing, this try/except block gives me cryptic problems, because the task is allowed to continue executing even though there is some serious error.
What is the best way to disable a try/except block during testing?
I've thought about implementing the code directly like this:
for account in self.ActiveAccountFactory():
    self.logger.debug('Updating %s', account.login)
    self.update_account_from_fb(account)
    self.check_balances()
    self.check_rois()
    self.logger.exception(traceback.format_exc())
in my test cases, but this makes my tests very clumsy, as I am copying large amounts of code over.
What should I do?

First of all: don't swallow all exceptions using except Exception. It's bad design. So cut it out.
With that out of the way:
One thing you could do is set up a monkeypatch for the logger.exception method. Then you can handle the test however you see fit based on whether it was called, whether that means creating a mock logger, a separate testing logger, or a custom testing logger class that stops the tests when certain exceptions occur. You could even choose to end the testing immediately by raising an error.
Here is an example using pytest's monkeypatch fixture. I like pytest's way of doing this because a predefined fixture is already set up for it, and no boilerplate code is required. However, there are other ways to do this as well (such as using unittest.mock.patch from the unittest module).
I will call your class SomeClass. What we will do is create a patched version of your SomeClass object as a fixture. The patched version will not log to the logger; instead, it will have a mock logger. Anything that happens to the logger will be recorded in the mock logger for inspection later.
import pytest
import unittest.mock as mock  # import mock for Python 2

@pytest.fixture
def SomeClassObj_with_patched_logger(monkeypatch):
    ##### SETUP PHASE ####
    # create a basic mock logger:
    mock_logger = mock.Mock(spec=LoggerClass)
    # create the class instance you will test:
    some_class_instance = SomeClass()
    # patch the 'logger' attribute so that when it is called on
    # 'some_class_instance' (which is bound to 'self' in the method)
    # things are re-routed to mock_logger
    monkeypatch.setattr(some_class_instance, 'logger', mock_logger)
    # the class object you created is now patched;
    # we can send that patched object to any test we want
    # using the standard pytest fixture way of doing things
    yield some_class_instance
    ###### TEARDOWN PHASE #######
    # after the test has run, we can inspect what happened to
    # the mock logger like so:
    print('\n#### ', mock_logger.method_calls)
If call.exception appears in the method calls of the mock logger, you know the exception method was called. There are a lot of other ways you could handle this as well; this is just one.
If you're using the logging module, LoggerClass should just be logging.Logger. Alternatively, you can just do mock_logger = mock.Mock(). Or, you could create your own custom testing logger class that raises an exception when its exception method is called. The sky is the limit!
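For instance, here is a minimal sketch of a logger stand-in that raises when its exception method is called, using a Mock rather than a full custom class (the RuntimeError message is purely illustrative):

import logging
import unittest.mock as mock

# a mock logger whose .exception() immediately raises,
# so any swallowed error stops the test run on the spot
mock_logger = mock.Mock(spec=logging.Logger)
mock_logger.exception.side_effect = RuntimeError('logger.exception was called')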
Use your patched object in any test like so:
def test_something(SomeClassObj_with_patched_logger):
    # no need to do the line below really, just getting
    # a shorter variable name
    my_obj = SomeClassObj_with_patched_logger
    #### DO STUFF WITH my_obj #####
If you are not familiar with pytest, see this training video for a little more in-depth information.

try...except blocks are difficult when you are testing because they catch and try to dispose of errors you would really rather see, as you have found out. While testing, for
except Exception as e:
(don't use Exception, e; that Python 2 syntax is not forward-compatible) substitute an exception type that is really unlikely to occur in your circumstances, such as
except AssertionError as e:
A text editor will do this for you (and reverse it afterwards) at the cost of a couple of mouse-clicks.
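As a concrete sketch of the swap, reusing the loop from the question, the temporary testing version would look like this, to be reversed once testing is done:

for account in self.ActiveAccountFactory():
    try:
        self.logger.debug('Updating %s', account.login)
        self.update_account_from_fb(account)
        self.check_balances()
        self.check_rois()
    except AssertionError as e:  # was: except Exception as e:
        self.logger.exception(traceback.format_exc())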

You can make callables test-aware by adding a _testing=False parameter. Use that to code alternate pathways in the callable for testing. Then pass _testing=True when calling from a test file.
For the situation presented in this question, putting if _testing: raise in the except body would 'uncatch' the exception, as sketched below.
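A minimal sketch of that idea using the loop from the question (the method name run_updates is hypothetical):

def run_updates(self, _testing=False):
    for account in self.ActiveAccountFactory():
        try:
            self.update_account_from_fb(account)
            self.check_balances()
            self.check_rois()
        except Exception:
            self.logger.exception(traceback.format_exc())
            if _testing:
                raise  # re-raise so the test sees the real error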
Conditioning module-level code is trickier. To get special behavior when testing module mod in package pack, I put
_testing = False            # in pack.__init__
from pack import _testing   # in pack.mod
Then in test_mod I put something like:
import pack
pack._testing = True
from pack import mod
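Note the ordering: pack._testing must be set before mod is imported, because from pack import _testing copies the value at import time. Inside mod, the flag can then gate the handler; a hypothetical sketch:

# in pack/mod.py
from pack import _testing

def process(accounts):
    for account in accounts:
        try:
            account.update()
        except Exception:
            log_failure(account)  # hypothetical helper
            if _testing:
                raise  # surface the error during tests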

Related

Python mock patch tests work separately but not combined

I have written unit tests using unittest.mock, and I am facing some issues when running the tests together versus singly. I am mocking jira.JIRA from the JIRA SDK (on PyPI). My tests look something like:
import ...

class MyTests(BaseTest):
    setUp ...
    tearDown ...

    @mock.patch('os.system')
    @mock.patch('jira.JIRA')
    def test_my_lambda(self, mock_jira, mock_os) -> None:
        # run some code
        mock_jira().transition_issue.assert_called_once()
        mock_jira().transition_issue.assert_called_with(param1, param2)
        # other mock conditions
Now, from my understanding, a MagicMock is returned, I can make sub-calls on it, and I can still run assertions on those sub-functions. This test passes in isolation; however, when running many tests together it says that transition_issue was not called, despite me stepping through the code and seeing that it is called.
Am I missing something?
I tried to look into resetting the mock after each test, but I assumed the patch took care of that.
I also tried to do the imports related to jira.JIRA within the function (anything which imports my class containing the jira.JIRA() call), but this did not work.
The fact that it works as a single test is confusing me.

Python testing: mocking a function that's imported AND used inside another function

Due to circular-import issues which are common with Celery tasks in Django, I'm often importing Celery tasks inside of my methods, like so:
# some code omitted for brevity
# accounts/models.py
def refresh_library(self, queue_type="regular"):
    from core.tasks import refresh_user_library
    refresh_user_library.apply_async(
        kwargs={"user_id": self.user.id}, queue=queue_type
    )
    return 0
In my pytest test for refresh_library, I'd only like to test that refresh_user_library (the Celery task) is called with the correct args and kwargs. But this isn't working:
# tests/test_accounts_models.py
@mock.patch("accounts.models.UserProfile.refresh_library.refresh_user_library")
def test_refresh_library():
The error says that refresh_library does not have an attribute refresh_user_library.
I suspect this is because the task (refresh_user_library) is imported inside the function itself, but I'm not too experienced with mocking, so this might be completely wrong.
Even though apply_async is called on your own function in core.tasks, if you do not want to test it but only want to make sure you are passing the correct arguments, you need to mock it. In your question you're mocking the wrong path. You should do:
# tests/test_accounts_models.py
@mock.patch("core.tasks.refresh_user_library.apply_async")
def test_refresh_library():
Inside refresh_library, refresh_user_library is a local name, not an attribute of the method. What you want is the real qualified name of the function you want to mock:
#mock.patch("core.tasks.refresh_user_library")
def test_refresh_library():
# you test here
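A hedged sketch of what the full test might look like (the make_user_profile helper and the exact assertions are assumptions, not from the question):

from unittest import mock

@mock.patch("core.tasks.refresh_user_library")
def test_refresh_library(mock_refresh_user_library):
    profile = make_user_profile()  # hypothetical fixture/helper
    result = profile.refresh_library(queue_type="regular")
    assert result == 0
    # the inside-the-function import resolves to the patched name,
    # so apply_async is recorded on the mock
    mock_refresh_user_library.apply_async.assert_called_once_with(
        kwargs={"user_id": profile.user.id}, queue="regular"
    )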

Monkey patching values of a function

I have a small function as follows:
def write_snapshot_backup_monitoring_values():
    try:
        snapshot_backup_result = 'my result'
        with open(config.MONITOR_SNAPSHOT_BACKUP_FILE, "w") as snapshot_backup_file:
            snapshot_backup_file.write(snapshot_backup_result)
    except Exception as exception:
        LOG.exception(exception)
where config.MONITOR_SNAPSHOT_BACKUP_FILE is declared in a config file with the value /home/result.log.
When I write a test case using pytest, I call this function as follows:
constants.MONITOR_SNAPSHOT_BACKUP_FILE = "/tmp/result.log"

@pytest.mark.functional_test
def test_write_snapshot_backup_monitoring_values():
    utils.write_snapshot_backup_monitoring_values()...
I want to monkey-patch the value of config.MONITOR_SNAPSHOT_BACKUP_FILE with constants.MONITOR_SNAPSHOT_BACKUP_FILE, which I have declared in the test case file. Basically, I want the test run to create /tmp/result.log and not /home/result.log. How can I do that? I am new to monkey patching in Python.
You don't make clear what config is, so I assume it is another module you have imported. There's no special technique for monkey-patching; you just assign the value. Monkey-patching is simply a name for adding or modifying attributes at runtime.
config.MONITOR_SNAPSHOT_BACKUP_FILE = constants.MONITOR_SNAPSHOT_BACKUP_FILE
However, there's one thing to keep in mind here: Python caches imported modules. If you change this value, it will change for all other Python modules that have imported config and run in the same runtime. So, be careful that you don't cause any side effects.
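If the tests already use pytest, one way to avoid those side effects is the monkeypatch fixture, which undoes the assignment after each test; a minimal sketch assuming the module names from the question:

def test_write_snapshot_backup_monitoring_values(monkeypatch):
    # patched only for the duration of this test, then auto-restored
    monkeypatch.setattr(config, "MONITOR_SNAPSHOT_BACKUP_FILE", "/tmp/result.log")
    utils.write_snapshot_backup_monitoring_values()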

Python: know if a function is called from a unittest?

Is there a way to know in Python if a function is called from the context of a unittest execution or a debugging run?
For context, I am trying to unit-test code in which I use functions that perform a database call. In order to avoid database calls during the test of that function (DB calls are tested separately), I am trying to make the DB I/O functions aware of their environment, so that they mock themselves when called within a unittest and log additional variables during a debug run.
My current approach is to read/write environment variables, but it seems like overkill, and I think Python must have a better mechanism for this.
Edit:
Here is an example of a function I am trying to unit-test:
from Database_IO import Database_read

def some_function(significance_level, time_range):
    data = Database_read(time_range)
    significant_data = data > significance_level
    return significant_data
In my opinion, if you write your function to behave in a different way when tested, you are not really testing it.
To test the function I'd mock.patch() the database object, and then check that it has been used correctly in your function.
The most difficult thing when you start using the mock library is to find the correct object to replace.
In your example, if in your_module you import the Database_read object from the Database_IO module, you can test it with code similar to the following:
with mock.patch('your_module.Database_read') as dbread_mock:
    # prepare the dbread_mock
    dbread_mock.return_value = 10
    # execute a test call
    retval = some_function(3, 'some range')
    # check the result
    dbread_mock.assert_called_with('some range')

how to mock function call used by imported pypi library in python

I have the following code that I'm trying to test:
great_report.py
from retry import retry

@retry((ReportNotReadyException), tries=3, delay=10, backoff=3)
def get_link(self):
    report_link = _get_report_link_from_3rd_party(params)
    if report_link:
        return report_link
    else:
        stats.count("report_not_ready", 1)
        raise ReportNotReadyException
I've got my testing function, which mocks _get_report_link_from_3rd_party and tests everything, but I don't want this function to actually pause execution when I run tests.
@mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
def test_get_link_raises_exception(self, mock_get_report_link):
    self.assertRaises(ReportNotReadyException, get_link)
I tried mocking the retry parameters, but I am running into issues where get_link keeps retrying over and over, which causes long build times instead of just raising the exception and continuing. How can I mock the parameters of the @retry call in my test?
As hinted here, an easy way to prevent the actual sleeping is by patching the time.sleep function. Here is the code that did that for me:
@patch('time.sleep', side_effect=lambda _: None)
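Combined with the question's existing mock, that might look like the following sketch (assuming the retry library reaches its delay via time.sleep, as this answer suggests):

@mock.patch('time.sleep', side_effect=lambda _: None)  # skip the real delays
@mock.patch('repo.great_report._get_report_link_from_3rd_party', return_value=None)
def test_get_link_raises_exception(self, mock_get_report_link, mock_sleep):
    # the retries still run, just without the real pauses
    self.assertRaises(ReportNotReadyException, get_link)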
There is no way to change a decorator's parameters after the module is loaded. Decorators wrap the original function and replace it at module load time.
First, I would like to encourage you to change your design a little to make it more testable.
If you extract the body of the get_link() method into a new method, you can test the new method, trust the retry decorator, and obtain your goal.
If you don't want to add a new method to your class, you can use a config module that stores the variables you pass to the retry decorator. After that, you can use two different modules for testing and production.
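A hedged sketch of that config-module idea (the settings module and its names are hypothetical); since the decorator's arguments are evaluated at import time, the test must set the values before importing great_report:

# settings.py (hypothetical)
RETRY_TRIES = 3
RETRY_DELAY = 10

# great_report.py
import settings
from retry import retry

@retry(ReportNotReadyException, tries=settings.RETRY_TRIES, delay=settings.RETRY_DELAY)
def get_link(self):
    ...

# in the test module, before importing great_report:
import settings
settings.RETRY_TRIES, settings.RETRY_DELAY = 1, 0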
The last way is the hacking way, where you replace retry.api.__retry_internal with your own version that invokes the original one, changing just the variables:
import unittest
from unittest.mock import *
from pd import get_link, ReportNotReadyException
import retry

orig_retry_internal = retry.api.__retry_internal

def _force_retry_params(new_tries=-1, new_delay=0, new_max_delay=None, new_backoff=1, new_jitter=0):
    def my_retry_internals(f, exceptions, tries, delay, max_delay, backoff, jitter, logger):
        # call the original __retry_internal with the new parameters
        return orig_retry_internal(f, exceptions, tries=new_tries, delay=new_delay, max_delay=new_max_delay,
                                   backoff=new_backoff, jitter=new_jitter, logger=logger)
    return my_retry_internals

class MyTestCase(unittest.TestCase):
    @patch("retry.api.__retry_internal", side_effect=_force_retry_params(new_tries=1))
    def test_something(self, m_retry):
        self.assertRaises(ReportNotReadyException, get_link, None)
IMHO you should use that hacking solution only if your back is against the wall and you have no chance to redesign your code to make it more testable. The internal function/class/method can change without notice, and your test can become difficult to maintain in the future.
