Python mock patch tests work separately but not combined

I have written unit tests using unittest.mock and I am facing some issues when running the tests together versus individually. I am mocking jira.JIRA from the JIRA SDK (on PyPI). My tests look something like:
import ...

class MyTests(BaseTest):
    setUp ...
    tearDown ...

    @mock.patch('os.system')
    @mock.patch('jira.JIRA')
    def test_my_lambda(self, mock_jira, mock_os) -> None:
        # run some code
        mock_jira().transition_issue.assert_called_once()
        mock_jira().transition_issue.assert_called_with(param1, param2)
        # other mock conditions
From my understanding, a MagicMock is returned, I can make sub-calls on it, and I can still run assertions against those sub-calls. This test passes in isolation; however, when running many tests together it reports that transition_issue was not called, despite my stepping through the code and seeing that it is called.
Am I missing something?
I looked into resetting the mock after each test, but I assumed the patch decorator took care of that.
I also tried doing the imports related to jira.JIRA inside the test function (anything that imports my class which makes the jira.JIRA() call), but this did not work.
The fact it works as a single test is confusing me.
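For reference, a minimal sketch of doing the patching explicitly per test, starting the patch in setUp and stopping it via addCleanup so every test gets a fresh MagicMock. The module name my_lambda_module is a placeholder for wherever JIRA is actually imported and used; this illustrates the patch/reset mechanics, not a confirmed fix for the problem above:
import unittest
from unittest import mock

class MyTests(unittest.TestCase):
    def setUp(self):
        # patch the name where it is looked up, and start/stop it per test
        self.jira_patcher = mock.patch('my_lambda_module.JIRA')  # hypothetical module
        self.mock_jira = self.jira_patcher.start()
        self.addCleanup(self.jira_patcher.stop)

    def test_my_lambda(self):
        # run the code under test here ...
        self.mock_jira.return_value.transition_issue.assert_called_once()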

Related

Python testing: mocking a function that's imported AND used inside another function

Due to circular-import issues which are common with Celery tasks in Django, I'm often importing Celery tasks inside of my methods, like so:
# some code omitted for brevity
# accounts/models.py
def refresh_library(self, queue_type="regular"):
    from core.tasks import refresh_user_library

    refresh_user_library.apply_async(
        kwargs={"user_id": self.user.id}, queue=queue_type
    )
    return 0
In my pytest test for refresh_library, I'd only like to test that refresh_user_library (the Celery task) is called with the correct args and kwargs. But this isn't working:
# tests/test_accounts_models.py
@mock.patch("accounts.models.UserProfile.refresh_library.refresh_user_library")
def test_refresh_library():
The error says that refresh_library has no attribute refresh_user_library.
I suspect this is because the task (refresh_user_library) is imported inside the function itself, but I'm not too experienced with mocking so this might be completely wrong.
Even though apply_async is called on your own function from core.tasks, if you do not want to test it but only want to make sure you are passing the correct arguments, you need to mock it. In your question you are mocking the wrong target. You should do:
# tests/test_accounts_models.py
@mock.patch("core.tasks.refresh_user_library.apply_async")
def test_refresh_library():
    ...
Inside refresh_library, refresh_user_library is a local name, not an attribute of the method. What you want is the real qualified name of the function you want to mock:
@mock.patch("core.tasks.refresh_user_library")
def test_refresh_library():
    # your test here
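A fuller version of that test might look like the sketch below. The make_user_profile() helper is hypothetical, and the assertion on apply_async mirrors the call from the question rather than the real project:
# tests/test_accounts_models.py
from unittest import mock

@mock.patch("core.tasks.refresh_user_library")
def test_refresh_library(mock_refresh_user_library):
    profile = make_user_profile()  # hypothetical helper that builds a UserProfile
    result = profile.refresh_library(queue_type="regular")

    assert result == 0
    mock_refresh_user_library.apply_async.assert_called_once_with(
        kwargs={"user_id": profile.user.id}, queue="regular"
    )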

How to disable a try/except block during testing?

I wrote a cronjob that iterates through a list of accounts and performs some web call for them (shown below):
for account in self.ActiveAccountFactory():
    try:
        self.logger.debug('Updating %s', account.login)
        self.update_account_from_fb(account)
        self.check_balances()
        self.check_rois()
    except Exception, e:
        self.logger.exception(traceback.format_exc())
Because this job is run by Heroku once every 10 minutes, I do not want the entire job to fail just because one account is running into issues (it happens). I placed a try/except block here so that the task is "fault-tolerant".
However, I noticed that while testing, this try/except block gives me cryptic problems because the task is allowed to continue executing even though there is some serious error.
What is the best way to disable a try/except block during testing?
I've thought about writing the code directly like this:
for account in self.ActiveAccountFactory():
    self.logger.debug('Updating %s', account.login)
    self.update_account_from_fb(account)
    self.check_balances()
    self.check_rois()
    self.logger.exception(traceback.format_exc())
in my test cases, but this makes my tests very clumsy since I am copying large amounts of code over.
What should I do?
First of all: don't swallow all exceptions using except Exception. It's bad design. So cut it out.
With that out of the way:
One thing you could do is set up a monkeypatch for the logger.exception method. Then you can handle the test however you see fit based on whether it was called, whether that means using a mock logger, a separate testing logger, or a custom testing logger class that stops the tests when certain exceptions occur. You could even choose to end the testing immediately by raising an error.
Here is an example using pytest's monkeypatch. I like pytest's way of doing this because it already has a predefined fixture set up for it, and no boilerplate code is required. However, there are other ways to do this as well (such as using unittest.mock.patch from the unittest module).
I will call your class SomeClass. What we will do is create a patched version of your SomeClass object as a fixture. The patched version will not log to the logger; instead, it will have a mock logger. Anything that happens to the logger will be recorded in the mock logger for inspection later.
import pytest
import unittest.mock as mock  # import mock for Python 2

@pytest.fixture
def SomeClassObj_with_patched_logger(monkeypatch):
    ##### SETUP PHASE ####
    # create a basic mock logger:
    mock_logger = mock.Mock(spec=LoggerClass)
    # create the class instance you will test
    some_class_instance = SomeClass()
    # patch the 'logger' attribute so that when it is used on
    # 'some_class_instance' (which is bound to 'self' in the method)
    # things are re-routed to mock_logger
    monkeypatch.setattr(some_class_instance, 'logger', mock_logger)
    # the class object you created is now patched;
    # we can send that patched object to any test we want
    # using the standard pytest fixture way of doing things
    yield some_class_instance
    ###### TEARDOWN PHASE #######
    # after the test has run, we can inspect what happened to
    # the mock logger like so:
    print('\n#### ', mock_logger.method_calls)
If call.exception appears in the method calls of the mock logger, you know that method was called. There are a lot of other ways you could handle this as well; this is just one.
If you're using the logging module, LoggerClass should just be logging.Logger. Alternatively, you can just do mock_logger = mock.Mock(). Or, you could create your own custom testing logger class that raises an exception when its exception method is called. The sky is the limit!
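For that last option, a minimal sketch of such a custom testing logger (a stand-in class invented here, not part of the answer's code) might look like this:
class RaisingTestLogger:
    """A stand-in logger whose exception() method re-raises so tests fail loudly."""

    def debug(self, *args, **kwargs):
        pass

    def exception(self, *args, **kwargs):
        raise AssertionError('logger.exception was called: %r' % (args,))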
Use your patched object in any test like so:
def test_something(SomeClassObj_with_patched_logger):
    # no need to do the line below really, just getting
    # a shorter variable name
    my_obj = SomeClassObj_with_patched_logger
    #### DO STUFF WITH my_obj #####
If you are not familiar with pytest, see this training video for a little bit more in depth information.
try...except blocks are difficult when you are testing because they catch and try to dispose of errors you would really rather see, as you have found out. While testing, for
except Exception as e:
(don't use Exception,e, it's not forward-compatible) substitute an exception type that is really unlikely to occur in your circumstances, such as
except AssertionError as e:
A text editor will do this for you (and reverse it afterwards) at the cost of a couple of mouse-clicks.
You can make callables test-aware by adding a _testing=False parameter. Use that to code alternate pathways in the callable for when testing, then pass _testing=True when calling from a test file.
For the situation presented in this question, putting if _testing: raise in the except body would 'uncatch' the exception, as sketched below.
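A minimal sketch of that idea, wrapping the loop from the question in a method (the method name update_all_accounts is invented for illustration):
def update_all_accounts(self, _testing=False):
    for account in self.ActiveAccountFactory():
        try:
            self.logger.debug('Updating %s', account.login)
            self.update_account_from_fb(account)
            self.check_balances()
            self.check_rois()
        except Exception as e:
            if _testing:
                raise  # 'uncatch' the exception so the test sees the real error
            self.logger.exception(traceback.format_exc())
In a test file you would then call obj.update_all_accounts(_testing=True).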
Conditioning module-level code is trickier. To get special behavior when testing module mod in package pack, I put
_testing = False # in `pack.__init__`
from pack import _testing # in pack.mod
Then in test_mod I put something like:
import pack
pack._testing = True
from pack import mod

Django Test Case Can't run method

I am just getting started with Django, so this may be something stupid, but I am not even sure what to google at this point.
I have a method that looks like this:
def get_user(self, user):
    return Utilities.get_userprofile(user)
The Utilities method looks like this:
@staticmethod
def get_userprofile(user):
    return UserProfile.objects.filter(user_auth__username=user)[0]
When I include this in the view, everything is fine. When I write a test case to use any of the methods inside the Utility class, I get None back:
Two test cases:
def test_stack_overflow(self):
    a = ObjName()
    print(a.get_user('admin'))

def test_Utility(self):
    print(Utilities.get_user('admin'))
Results:
Creating test database for alias 'default'...
None
..None
.
----------------------------------------------------------------------
Can someone tell me why this works in the view but not inside the test case, and why it does not generate any error messages?
Thanks
Verify that your unit tests comply with the following:
TestClass must be written in a file named test*.py
TestClass must be subclassed from unittest.TestCase
TestClass should have a setUp function to create objects in the database (usually done this way, but object creation can happen in the test functions as well)
TestClass functions should start with test so they are identified and run by the ./manage.py test command
TestClass may have a tearDown method to properly end the unit test case
Test Case Execution process:
When you run ./manage.py test, Django sets up a test_your_database_name database, creates all the objects mentioned in the setUp function (usually), and starts executing the test functions in the order they appear in the class. Once all the test functions have been executed, it looks for a tearDown function, executes it if present in the test class, and destroys the test database.
It may be that you have not created the objects in the setUp function or elsewhere in the TestClass, as illustrated below.
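A minimal sketch of what that setUp could look like; the user_auth relationship is taken from the question's filter call, while the import paths and field values are assumptions:
# tests/test_utilities.py (illustrative only)
from django.contrib.auth.models import User
from django.test import TestCase

from myapp.models import UserProfile   # hypothetical import paths
from myapp.utils import Utilities

class UtilitiesTests(TestCase):
    def setUp(self):
        # create the data the test expects to find in the test database
        user = User.objects.create_user(username='admin', password='secret')
        UserProfile.objects.create(user_auth=user)

    def test_get_userprofile(self):
        profile = Utilities.get_userprofile('admin')
        self.assertIsNotNone(profile)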
Can you kindly post the entire traceback and the test file so we can help you better?

Running pytest tests in another Python package

Right now, I have a Python package (let's call it mypackage) with a bunch of tests that I run with pytest. One particular feature can have many possible implementations, so I have used the funcarg mechanism to run these tests with a reference implementation.
# In mypackage/tests/conftest.py
def pytest_funcarg__Feature(request):
    return mypackage.ReferenceImplementation

# In mypackage/tests/test_stuff.py
def test_something(Feature):
    assert Feature(1).works
Now, I am creating a separate Python package with a fancier implementation (fancypackage). Is it possible to run all of the tests in mypackage that contain the Feature funcarg, only with different implementations?
I would like to avoid having to change fancypackage whenever I add new tests in mypackage, so explicit imports aren't ideal. I know that I can run all of the tests with pytest.main(), but since I have several implementations of my feature, I don't want to call pytest.main() multiple times. Ideally, it would look something like this:
# In fancypackage/tests/test_impl1.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation1

## XXX: Do pytest collection on mypackage.tests, but don't run them

# In fancypackage/tests/test_impl2.py
def pytest_funcarg__Feature(request):
    return fancypackage.Implementation2

## XXX: Do pytest collection on mypackage.tests, but don't run them
Then, when I run pytest in fancypackage, it would collect each of the mypackage.tests tests twice, once for each feature implementation. I have tried doing this with explicit imports, and it seems to work fine, but I don't want to explicitly import everything.
Bonus
An additional nice bonus would be to only collect those tests that contain the Feature funcarg. Is that possible?
Example with unittest
Before switching to py.test, I did this with the standard library's unittest. The function for that is the following:
def mypackage_test_suite(Feature):
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    mypackage_tests = loader.discover('mypackage.tests')
    for test in all_testcases(mypackage_tests):
        if hasattr(test, 'Feature'):
            test.Feature = Feature
            suite.addTest(test)
    return suite

def all_testcases(test_suite_or_case):
    try:
        suite = iter(test_suite_or_case)
    except TypeError:
        yield test_suite_or_case
    else:
        for test in suite:
            for subtest in all_testcases(test):
                yield subtest
Obviously things are different now because we're dealing with test functions and classes instead of just classes, but it seems like there should be some equivalent in py.test that builds the test suite and allows you to iterate through it.
You could parameterise your Feature fixture:
@pytest.fixture(params=['ref', 'fancy'])
def Feature(request):
    if request.param == 'ref':
        return mypackage.ReferenceImplementation
    else:
        return fancypackage.Implementation1
Now if you run py.test it will test both.
Selecting tests based on the fixture they use is not possible as far as I know; you could probably cobble something together using request.applymarker() and -m, however.
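For the bonus question, one possible sketch (not from the answer above, and a different approach than applymarker) is a conftest.py hook that deselects collected tests that do not request the Feature fixture, using item.fixturenames:
# fancypackage/tests/conftest.py (illustrative sketch)
def pytest_collection_modifyitems(config, items):
    selected, deselected = [], []
    for item in items:
        # keep only tests that actually request the 'Feature' fixture/funcarg
        if 'Feature' in getattr(item, 'fixturenames', []):
            selected.append(item)
        else:
            deselected.append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected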

Python unittest with expensive setup

My test file is basically:
class Test(unittest.TestCase):
    def testOk(self):
        pass

if __name__ == "__main__":
    expensiveSetup()
    try:
        unittest.main()
    finally:
        cleanUp()
However, I do wish to run my tests through the NetBeans testing tools, and to do that I need unittests that don't rely on an environment set up in main. Looking at Caching result of setUp() using Python unittest, the recommendation is to use Nose. However, I don't think NetBeans supports this; I didn't find any information indicating that it does. Additionally, I am the only one here actually writing tests, so I don't want to introduce additional dependencies for the other two developers unless they are needed.
How can I do the setup and cleanup once for all the tests in my TestSuite?
The expensive setup here is creating some files with dummy data, as well as setting up and tearing down a simple xml-rpc server. I also have 2 test classes, one testing locally and one testing all methods over xml-rpc.
If you use Python >= 2.7 (or unittest2 for Python >= 2.4 & <= 2.6), the best approach would be to use
def setUpClass(cls):
    # ...
setUpClass = classmethod(setUpClass)
to perform some initialization once for all tests belonging to the given class.
And to perform the cleanup, use:
@classmethod
def tearDownClass(cls):
    # ...
See also the unittest standard library documentation on setUpClass and tearDownClass classmethods.
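Putting it together for the setup described in the question, a hedged sketch (the expensiveSetup()/cleanUp() helpers are the ones from the question; how they are shared between the local and XML-RPC test classes is assumed):
import unittest

class LocalTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # create the dummy data files and start the XML-RPC server once
        expensiveSetup()

    @classmethod
    def tearDownClass(cls):
        # tear everything down once, after every test in this class has run
        cleanUp()

    def testOk(self):
        pass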
This is what I do:
class TestSearch(unittest.TestCase):
    """General Search tests for...."""

    matcher = None
    counter = 0
    num_of_tests = None

    def setUp(self):  # pylint: disable-msg=C0103
        """Only instantiate the matcher once"""
        if self.matcher is None:
            self.__class__.matcher = Matcher()
            self.__class__.num_of_tests = len(filter(self.isTestMethod, dir(self)))
        self.__class__.counter = self.counter + 1

    def tearDown(self):  # pylint: disable-msg=C0103
        """And kill it when done"""
        if self.counter == self.num_of_tests:
            print 'KILL KILL KILL'
            del self.__class__.matcher
Sadly (because I do want my tests to be independent and deterministic), I do this a lot, because system tests that take less than 5 minutes are also important.
First of all, what S. Lott said. However, you do not want to do that. There is a reason setUp and tearDown are wrapped around each test: they help preserve the determinism of testing.
Otherwise, if some test places the system in a bad state, your next tests may fail. Ideally, each of your tests should be independent.
Also, if you insist on doing it this way, instead of writing by hand self.runTest1(), self.runTest2(), you might want to do a bit of introspection in order to find the methods to run.
Won't package-level initialization do it for you? From the Nose Wiki:
nose allows tests to be grouped into test packages. This allows package-level setup; for instance, if you need to create a test database or other data fixture for your tests, you may create it in package setup and remove it in package teardown once per test run, rather than having to create and tear it down once per test module or test case.
To create package-level setup and teardown methods, define setup and/or teardown functions in the __init__.py of a test package. Setup methods may be named setup, setup_package, setUp, or setUpPackage; teardown may be named teardown, teardown_package, tearDown or tearDownPackage. Execution of tests in a test package begins as soon as the first test module is loaded from the test package.
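Under that scheme, and reusing the expensiveSetup()/cleanUp() helpers from the question, the package-level fixture could be as small as this sketch:
# tests/__init__.py (illustrative)
def setup_package():
    expensiveSetup()   # create dummy data files, start the XML-RPC server

def teardown_package():
    cleanUp()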
You can save state recording whether expensiveSetup() has run or not.
_expensive_setup_has_run = False  # single underscore: a double-underscore name would be mangled inside the class

class ExpensiveSetupMixin(unittest.TestCase):
    def setUp(self):
        global _expensive_setup_has_run
        super(ExpensiveSetupMixin, self).setUp()
        if _expensive_setup_has_run is False:
            expensiveSetup()
            _expensive_setup_has_run = True
Or some kind of variation of this; maybe ping the XML-RPC server and create a new one if it isn't answering.
But the unit-testing way, as far as I know, is to set up and tear down per test even if it is expensive.
I know nothing about NetBeans, but I thought I should mention zope.testrunner and its support for a nifty thing: Layers. Basically, you do the test setup in separate classes and attach those classes to the tests. These classes can inherit from each other, forming a layering of setups. The test runner will then call each setup only once and keep its state in memory, and instead of setting up and tearing down it will simply copy the relevant layer context as the setup.
This speeds up test setup a lot, and is used when you test Zope products and Plone, where the test setup often needs you to start a Plone CMS server, create a Plone site and add loads of content, a process that can take upwards of half a minute. Doing that for each test method is obviously impossible, but with layers it is done only once. This shortens the test setup and protects the test methods from each other, and therefore means that the testing continues to be deterministic.
So I don't know if zope.testrunner will work for you, but it's worth a try.
It should be possible to do this by defining startTestRun and stopTestRun on the unittest.TestResult class. See this answer: https://stackoverflow.com/a/64892396/2679740
You can ensure setUp and tearDown execute only once if you have just one test method, runTest. This method can do whatever else it wants. Just be sure you don't have any methods with names that start with test.
class MyExpensiveTest(unittest.TestCase):
    def setUp(self):
        self.resource = owThatHurts()

    def tearDown(self):
        self.resource.flush()
        self.resource.finish()

    def runTest(self):
        self.runTest1()
        self.runTest2()

    def runTest1(self):
        self.assertEquals(...)

    def runTest2(self):
        self.assertEquals(...)
It doesn't automagically figure out what to run. If you add a test method, you also have to update runTest.
