I'm trying to unit test a function with a nested for loop using the mock module in Python:
main.py:
from __future__ import annotations  # postpone annotation evaluation; some_object is a placeholder
from dataclasses import dataclass
from typing import Set

@dataclass
class Helper_class:
    Attr1: Set[some_object]
    Attr2: Set[str]

def fetch_rule_for(api, id):
    ...  # return some rule_objects based on id

def my_func(list_ids):
    api = ...  # create some api connection
    result_dict = dict()
    for id in list_ids:
        Attr1_set = set()
        Attr2_set = set()
        for rule in fetch_rule_for(api, id):
            if any(a.name == 'some_action_name' for a in rule.actions):
                Attr2_set.add(rule.attr1)
                Attr1_set.add(rule)
        result_dict[id] = Helper_class(Attr1_set, Attr2_set)
    return result_dict
Rule is a class object with an actions attribute (a collection of action objects); each action is another class object with a name attribute.
Two questions I'm struggling with:
(1) How do I patch the return value of the function fetch_rule_for(api, id) when the return value is a complicated class object?
(2) How do I deal with the for loops in a unit test? I have seen mock.call_count mentioned, but can someone please go into more detail or point me to relevant resources?
I'm new to unit testing in Python, so any help is much appreciated!
If you want to patch a return value with a class instance, just instantiate the class and manually set any necessary attributes; then it's just a matter of using configure_mock(return_value=class_object).
Here is an example with requests.Response:
import json
from requests import Response

resp = Response()
resp.status_code = 200
resp._content = json.dumps({'token': 'xyz123'}).encode()  # _content must be bytes for resp.json() to work
mock_post.configure_mock(return_value=resp)
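For context, mock_post above would typically come from patching requests.post where it is used; a minimal sketch, assuming the code under test lives in a hypothetical module mymodule that imports requests:

import json
import unittest
from unittest import mock
from requests import Response

class TestToken(unittest.TestCase):
    @mock.patch('mymodule.requests.post')  # 'mymodule' is an assumed module name
    def test_token(self, mock_post):
        resp = Response()
        resp.status_code = 200
        resp._content = json.dumps({'token': 'xyz123'}).encode()
        mock_post.configure_mock(return_value=resp)
        # anything in mymodule that now calls requests.post(...) receives resp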
In the case of the for loop, you likely want to mock any returns contained within using side_effect:
@mock.patch('your_class.fetch_rule_for', side_effect=iter([1, 2, 3]))
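Putting both pieces together for the question's code, a sketch (assuming the code above lives in main.py; the rule/action shapes are stand-ins built from the question):

import unittest
from unittest import mock

from main import my_func

class TestMyFunc(unittest.TestCase):
    # patch the name where my_func looks it up ('main' is the question's module)
    @mock.patch('main.fetch_rule_for')
    def test_my_func(self, mock_fetch):
        # build a fake rule: anything with .actions and .attr1 will do
        action = mock.Mock()
        action.name = 'some_action_name'  # set after creation: Mock(name=...) names the mock itself
        rule = mock.Mock(actions=[action], attr1='rule-1')
        # one return value per outer-loop iteration
        mock_fetch.side_effect = [[rule], []]

        result = my_func(['id1', 'id2'])

        self.assertEqual(mock_fetch.call_count, 2)         # outer loop ran once per id
        self.assertEqual(result['id1'].Attr2, {'rule-1'})  # inner loop matched the action name
        self.assertEqual(result['id2'].Attr1, set())       # no rules for the second id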
I'm using Python 3.9.2 with unittest and mock to patch out a class.
My code under test instantiates an object of the class and mock returns a MagicMock object as the instance.
My question is, can I access that object from my test code?
I can see the call that instantiates the class in the mock_calls list, but cannot find a way of accessing the instance that is returned from that call.
The reason I need to access the instance is that my code under test attaches attributes to the instance rather than call methods on it. It is easy to test method calls, but is there a direct way to test attributes?
Upon investigation I found that there was only a single instance of a MagicMock being created and returned each time I instantiated my class. This behaviour was not convenient for me due to the attributes that I add to the class.
I created the following test aid to support my needs. This is not general-purpose but could be adapted for other circumstances.
import unittest
from unittest import mock

class MockMyClass():
    """mock multiple MyClass instances

    Note - the code under test must add a name attribute to each instance
    """
    def __init__(self):
        self.myclass = []

    def factory(self, /, *args, **kwargs):
        """return a new instance each time called"""
        new = mock.MagicMock()
        # override __enter__ to enable the with... context manager behaviour
        # for convenience in testing
        new.__enter__ = lambda x: new
        self.myclass.append(new)
        return new

    def __getitem__(self, key: str) -> mock.MagicMock:
        """emulate a dict by returning the named instance

        use as
            mockmyclass['name'].assert_called_once()
        or
            with mockmyclass['name'] as inst:
                inst.start.assert_called_once()
        """
        # Important - the code under test gives the instance a name
        # attribute and this relies on that attribute so is not
        # general purpose
        wanted = [t for t in self.myclass if t.name == key]
        if not wanted:
            names = [t.name for t in self.myclass]
            raise ValueError(f'no timer {key} in {names}')
        return wanted[0]
class TestBehaviour(unittest.TestCase):
    def setUp(self):
        self.mockmyclass = MockMyClass()
        self.mocked = mock.patch(
            'path-to-my-file.MyClass',
            side_effect=self.mockmyclass.factory,
        )
        self.addCleanup(self.mocked.stop)
        self.mocked = self.mocked.start()

    def test_something(self):
        # call code under test
        # then test with
        with self.mockmyclass['name-of-instance'] as inst:
            inst.start.assert_called_once()
            inst.stop.assert_called_once()
        # or test with
        self.mockmyclass['name-of-instance'].start.assert_called_once()
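For reference, this helper assumes code under test shaped roughly like the following hypothetical sketch, where each instance is given a name attribute right after construction:

def start_timers(names):
    timers = []
    for n in names:
        t = MyClass()  # replaced by MockMyClass.factory while the patch is active
        t.name = n     # the attribute that MockMyClass.__getitem__ keys on
        t.start()
        timers.append(t)
    return timers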
Need your insight:
In my own test setup (init_setup), I need to call another test that is already defined in the class Test_Create_Tmp(). The issue is that this class has a fixture (init_api) that returns a dict of API objects.
In init_setup, at the line inv.test_post_inv_data(), I got 'method' object is not subscriptable, because internally it calls the object's API like this: init_api["nAPI"].postJsonData(...).
How do I get this working, if I'm not allowed to remove the fixture init_api() from that class?
I know I can get it working by completely getting rid of the fixture init_api and moving its code inside test_post_inv_data().
Thanks!
My own setup:
@pytest.fixture(scope="class")
def init_setup(self, read_data):
    # import Test_Create_Tmp class here
    inv = Test_Create_Tmp()
    inv.test_post_inv_data(read_data, inv.init_api)

# this class is defined in another file
class Test_Create_Tmp():
    @pytest.fixture
    def init_api(self, client):
        self.nAPI = NAPI(client)  # NAPI is a class
        self.sAPI = SApi(client)  # SApi is another class
        return {"nAPI": self.nAPI, "sAPI": self.sAPI}

    def test_post_inv_data(self, read_data, init_api):
        ...
        init_api["nAPI"].postJsonData(json.dumps(data))
I figured it out myself. I just need to create the needed objects (i.e., nAPI, sAPI) to invoke the call:
inv = Test_Create_Tmp()
init_api = {
    'nAPI': NAPI(client),  # create an object of NAPI
    'sAPI': SApi(client),  # create an object of SApi
}
inv.test_post_inv_data(read_data, init_api)
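For completeness, here is that workaround placed inside the init_setup fixture itself (a sketch; client is assumed to be an available fixture in scope):

@pytest.fixture(scope="class")
def init_setup(self, read_data, client):
    # build the dict the test expects, instead of passing the bound method inv.init_api
    init_api = {
        'nAPI': NAPI(client),
        'sAPI': SApi(client),
    }
    inv = Test_Create_Tmp()
    inv.test_post_inv_data(read_data, init_api)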
I am new to Python, so I apologize if this is a duplicate or overly simple question. I have written a coordinator class that calls two other classes that use the kafka-python library to send/read data from Kafka. I want to write a unit test for my coordinator class, but I'm having trouble figuring out how best to go about this. I was hoping that I could make an alternate constructor that I could pass my mocked objects into, but this doesn't seem to be working, as I get an error that test_mycoordinator cannot be resolved. Am I going about testing this class the wrong way? Is there a Pythonic way I should be testing it?
Here is what my test class looks like so far:
import unittest
from mock import Mock
from mypackage import mycoordinator

class MyTest(unittest.TestCase):
    def setUpModule(self):
        # Create a mock producer
        producer_attributes = ['__init__', 'run', 'stop']
        mock_producer = Mock(name='Producer', spec=producer_attributes)

        # Create a mock consumer
        consumer_attributes = ['__init__', 'run', 'stop']
        data_out = [{u'dataObjectID': u'test1'},
                    {u'dataObjectID': u'test2'},
                    {u'dataObjectID': u'test3'}]
        mock_consumer = Mock(
            name='Consumer', spec=consumer_attributes, return_value=data_out)

        self.coor = mycoordinator.test_mycoordinator(mock_producer, mock_consumer)

    def test_send_data(self):
        # Create some data and send it to the producer
        count = 0
        while count < 3:
            count += 1
            testName = 'test' + str(count)
            self.coor.sendData(testName, None)
And here is the class I am trying to test:
class MyCoordinator():
    def __init__(self):
        # Process command line arguments using argparse
        ...
        # Initialize the producer and the consumer
        self.myproducer = producer.Producer(self.servers,
                                            self.producer_topic_name)
        self.myconsumer = consumer.Consumer(self.servers,
                                            self.consumer_topic_name)

    # Constructor used for testing -- DOES NOT WORK
    @classmethod
    def test_mycoordinator(cls, mock_producer, mock_consumer):
        cls.myproducer = mock_producer
        cls.myconsumer = mock_consumer

    # Send the data to the producer
    def sendData(self, data, key):
        self.myproducer.run(data, key)

    # Receive data from the consumer
    def getData(self):
        data = self.myconsumer.run()
        return data
There is no need to provide a separate constructor. Mocking patches your code to replace objects with mocks. Just use the mock.patch() decorator on your test methods; it'll pass in references to the generated mock objects.
Both producer.Producer() and consumer.Consumer() are then mocked out before you create the instance:
import unittest
import mock

class MyTest(unittest.TestCase):
    @mock.patch('producer.Producer', autospec=True)
    @mock.patch('consumer.Consumer', autospec=True)
    def test_send_data(self, mock_consumer, mock_producer):
        # configure the consumer instance run method
        consumer_instance = mock_consumer.return_value
        consumer_instance.run.return_value = [
            {u'dataObjectID': u'test1'},
            {u'dataObjectID': u'test2'},
            {u'dataObjectID': u'test3'}]

        coor = MyCoordinator()

        # Create some data and send it to the producer
        for count in range(1, 4):
            coor.sendData('test{}'.format(count), None)

        # Now verify that the mocks have been called correctly; sendData
        # calls run() on the producer *instance*, i.e. mock_producer.return_value
        producer_instance = mock_producer.return_value
        producer_instance.run.assert_has_calls([
            mock.call('test1', None),
            mock.call('test2', None),
            mock.call('test3', None)])
So the moment test_send_data is called, the mock.patch() code replaces the producer.Producer reference with a mock object. Your MyCoordinator class then uses those mock objects rather than the real code. Calling producer.Producer() returns the same mock object that mock_producer.return_value references, etc.
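You can see that identity directly in the test if you like (a quick sanity check, not required):

coor = MyCoordinator()
# the attributes set in __init__ are exactly the patcher's instance mocks
assert coor.myproducer is mock_producer.return_value
assert coor.myconsumer is mock_consumer.return_value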
I've made the assumption that producer and consumer are top-level module names. If they are not, provide the full import path. From the mock.patch() documentation:
target should be a string in the form 'package.module.ClassName'. The target is imported and the specified object replaced with the new object, so the target must be importable from the environment you are calling patch() from. The target is imported when the decorated function is executed, not at decoration time.
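So if, for example, the modules live inside a package (a hypothetical mypackage here), the decorators would become:

@mock.patch('mypackage.producer.Producer', autospec=True)
@mock.patch('mypackage.consumer.Consumer', autospec=True)
def test_send_data(self, mock_consumer, mock_producer):
    # decorators apply bottom-up, so the Consumer mock is the first argument
    ...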
I'm attempting to create a few unit tests for my class. I want to mock the API calls, so that I don't burn through my API quota running some of these tests. I have multiple test cases that will call the fetch method, and depending on the passed URL I'll get different results back.
My example class looks like this:
import requests

class ExampleAPI(object):
    def fetch(self, url, params=None, key=None, token=None, **kwargs):
        return requests.get(url).json()  # returns the decoded JSON
The tutorial I'm looking at shows that I can do something like this:
import unittest
from mock import patch

from mymodule import ExampleAPI

def fake_fetch_test_one(url):
    ...

class TestExampleAPI(unittest.TestCase):
    @patch('mymodule.ExampleAPI.fetch', fake_fetch_test_one)
    def test_fetch(self):
        e = ExampleAPI()
        self.assertEqual(e.fetch('http://my.api.url.example.com'), """{'result': 'True'}""")
When I do this, though, I get an error that says:
TypeError: fake_fetch_test_one() takes exactly 1 argument (3 given)
What is the proper way to mock a requests.get call that is in a method in my class? I'll need the ability to change the mock'd response per test, because different URLs can provide different response types.
Your fake fetch needs to accept the same arguments as the original:
def fake_fetch(self, url, params=None, key=None, token=None, **kwargs):
Note that it's better to mock just the external interface, which means letting fetch call requests.get (or at least, what it thinks is requests.get):
@patch('mymodule.requests.get')
def test_fetch(self, fake_get):
    # It would probably be better to construct a valid fake
    # response object whose `json` method returns the right
    # thing, but this is easier for demonstration purposes.
    # I'm assuming nothing else is done with the response.
    expected = {"result": "True"}
    fake_get.return_value.json.return_value = expected

    e = ExampleAPI()
    self.assertEqual(e.fetch('http://my.api.url.example.com'), expected)
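Since different URLs should produce different responses, side_effect can also be a function that dispatches on the URL; a sketch (the URLs and payloads here are made up):

from mock import patch, Mock

@patch('mymodule.requests.get')
def test_fetch_per_url(self, fake_get):
    payloads = {
        'http://api.example.com/a': {'result': 'True'},
        'http://api.example.com/b': {'result': 'False'},
    }

    def fake_response(url, *args, **kwargs):
        # build a stand-in response whose .json() returns the canned payload
        resp = Mock()
        resp.json.return_value = payloads[url]
        return resp

    fake_get.side_effect = fake_response

    e = ExampleAPI()
    self.assertEqual(e.fetch('http://api.example.com/a'), {'result': 'True'})
    self.assertEqual(e.fetch('http://api.example.com/b'), {'result': 'False'})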
From your test method you can monkeypatch your requests module:
import unittest

class Mock:
    pass

def fake_get_test_one(url):
    # return a fake JSON payload for the get
    ...

ExampleAPI.requests = Mock()
ExampleAPI.requests.get = Mock()
ExampleAPI.requests.json = fake_get_test_one

class TestExampleAPI(unittest.TestCase):
    def test_fetch(self):
        e = ExampleAPI()
        self.assertEqual(e.fetch('http://my.api.url.example.com'), """{'result': 'True'}""")
You can set up the patch in the setUp() and corresponding tearDown() methods of your test class if needed.
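One way to do that with mock's patcher objects (a sketch; 'mymodule' stands in for wherever ExampleAPI lives):

import unittest
from mock import patch

from mymodule import ExampleAPI

class TestExampleAPI(unittest.TestCase):
    def setUp(self):
        # start the patch and make sure it is undone after each test
        self.patcher = patch('mymodule.requests.get')
        self.fake_get = self.patcher.start()
        self.addCleanup(self.patcher.stop)

    def test_fetch(self):
        self.fake_get.return_value.json.return_value = {'result': 'True'}
        e = ExampleAPI()
        self.assertEqual(e.fetch('http://my.api.url.example.com'), {'result': 'True'})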
I have a class which makes requests to a remote API. I'd like to be able to reduce the number of calls I'm making. Some of the methods in my class make the same API calls (but for different reasons), so I'd like the ability for them to 'share' a cached API response.
I'm not entirely sure whether it's more Pythonic to use optional parameters or multiple methods, as the methods have some required parameters if they are making an API call.
Here are the approaches as I see them; which do you think is best?
class A:
    def a_method(self, item_id, cached_item_api_response=None):
        """Seems awkward having to supply item_id even
        if cached_item_api_response is given
        """
        api_response = None
        if cached_item_api_response:
            api_response = cached_item_api_response
        else:
            api_response = ...  # make api call using item_id
        ...  # do stuff
Or this:
class B:
    def a_method(self, item_id=None, cached_api_response=None):
        """Seems awkward as it makes no sense NOT to supply EITHER
        item_id or cached_api_response
        """
        api_response = None
        if cached_api_response:
            api_response = cached_api_response
        elif item_id:
            api_response = ...  # make api call using item_id
        else:
            ...  # ERROR
        ...  # do stuff
Or is this more appropriate?
class C:
    """Seems even more awkward to have different method calls"""

    def a_method(self, item_id):
        api_response = ...  # make api call using item_id
        self.api_response_logic(api_response)

    def b_method(self, cached_api_response):
        self.api_response_logic(cached_api_response)

    def api_response_logic(self, api_response):
        ...  # do stuff
Normally when writing a method, one could argue that a method/object should do one thing and do it well. If your method gets more and more parameters which require more and more ifs in your code, that probably means your code is doing more than one thing, especially if those parameters trigger totally different behavior. Instead, maybe the same behavior could be produced by having different classes and having them override methods.
Maybe you could use something like:
class BaseClass(object):
    def a_method(self, item_id):
        response = lookup_response(item_id)
        return response

class CachingClass(BaseClass):
    def a_method(self, item_id):
        if item_id in cache:
            return item_from_cache
        return super(CachingClass, self).a_method(item_id)

    def uncached_method(self, item_id):
        return super(CachingClass, self).a_method(item_id)
That way you can split the logic of how to lookup the response and the caching while also making it flexible for the user of the API to decide if they want the caching capabilities or not.
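Usage would then look something like this (assuming lookup_response and the cache are implemented):

api = CachingClass()
cached = api.a_method('item-42')        # served from the cache when possible
fresh = api.uncached_method('item-42')  # always performs the lookup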
There is nothing wrong with the method used in your class B. To make it more obvious at a glance that you actually need to include either item_id or cached_api_response, I would put the error checking first:
class B:
    def a_method(self, item_id=None, cached_api_response=None):
        """Requires either item_id or cached_api_response"""
        if not ((item_id is None) ^ (cached_api_response is None)):
            ...  # error
        # or, if you want to allow both,
        if (item_id is None) and (cached_api_response is None):
            ...  # error

        # you don't actually have to do this on one line
        # also don't use it if cached_api_response can evaluate to False
        api_response = cached_api_response or ...  # make api call using item_id
        ...  # do stuff
Ultimately this is a judgement that must be made for each situation. I would ask myself, which of these two more closely fits:
1. Two completely different algorithms or actions, with completely different semantics, even though they may be passed similar information
2. A single conceptual idea, with consistent semantics, but with nuance based on input
If the first is closest, go with separate methods. If the second is closest, go with optional arguments. You might even implement a single method by testing the type of the argument(s) to avoid passing additional arguments.
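A sketch of that last idea, dispatching on the argument's type (the str/dict types and the _call_api helper are assumptions for illustration, not from the question):

class B:
    def a_method(self, item_or_response):
        # accept either an item id (str) or an already-fetched response (dict)
        if isinstance(item_or_response, dict):
            api_response = item_or_response
        elif isinstance(item_or_response, str):
            api_response = self._call_api(item_or_response)  # hypothetical helper
        else:
            raise TypeError('expected an item id or a cached response')
        ...  # do stuff with api_response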
This is an OO anti-pattern.
class API_Connection(object):
    def do_something_with_api_response(self, response):
        ...

    def do_something_else_with_api_response(self, response):
        ...
You have two methods on an instance and you're passing state between them explicitly? Why are these methods and not bare functions in a module?
Instead, think about using encapsulation to help you by having the instance of the class own the api response.
For example:
class API_Connection(object):
    def __init__(self, api_url):
        self._url = api_url
        self._cached_response = None

    @property
    def response(self):
        """Actually use the _url and get the response when needed."""
        if self._cached_response is None:
            # actually calculate self._cached_response by making our
            # remote call, etc.
            self._cached_response = self._get_api_response(self._url)
        return self._cached_response

    def _get_api_response(self, api_url):
        """Make the request and return the api's response"""

    def do_something_with_api_response(self):
        # just use self.response
        do_something(self.response)

    def do_something_else_with_api_response(self):
        # just use self.response
        do_something_else(self.response)
You have caching and any method which needs this response can run in any order without making multiple api requests because the first method that needs self.response will calculate it and every other will use the cached value. Hopefully it's easy to imagine extending this with multiple URLs or RPC calls. If you have a need for a lot of methods that cache their return values like response above then you should look into a memoization decorator for your methods.
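On the memoization point: for a single cached value like response, the standard library already provides such a decorator; a sketch using functools.cached_property (available in Python 3.8+):

import functools

class API_Connection(object):
    def __init__(self, api_url):
        self._url = api_url

    @functools.cached_property
    def response(self):
        # computed on first access, then stored on the instance
        return self._get_api_response(self._url)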
The cached response should be saved in the instance, not passed around like a bag of Skittles -- what if you dropped it?
Is item_id unique per instance, or can an instance make queries for more than one? If it can have more than one, I'd go with something like this:
class A(object):
    def __init__(self):
        self._cache = dict()

    def a_method(self, item_id):
        """Gets api_response from cache (cache may have to get a current response)."""
        api_response = self._get_cached_response(item_id)
        ...  # do stuff

    def b_method(self, item_id):
        """'nother method (just for show)"""
        api_response = self._get_cached_response(item_id)
        ...  # do other stuff

    def _get_cached_response(self, item_id):
        if item_id in self._cache:
            return self._cache[item_id]
        response = self._cache[item_id] = api_call(item_id, ...)
        return response

    def refresh_response(self, item_id):
        if item_id in self._cache:
            del self._cache[item_id]
        self._get_cached_response(item_id)
And if you ever have to get the most current info about item_id, you can use the refresh_response method.