pytest mock function but verify parameters - python

I have a method with an external API call.
The external API call takes two parameters (an XML string and a number) and, if it is successful, it creates something in the external system.
def create_stuff(my_number: int, name: str):
    try:
        new_value = my_number + 5
        stuff_xml = '''<?xml version='1.0' encoding='UTF-8'?>
        <description>''' + name + '''</description>
        <properties/>'''
        external_api_call.create_stuff(stuff_xml, new_value)
        return {"status": 'success', "message": "created stuff '" + name + "'"}
    except Exception as e:
        return {"status": 'failure', "message": "error while creating stuff: " + str(e)}
Because I can't reach the external system from the pytest environment and really don't want to create something there just for testing purposes, I decided to mock the external call.
def test_create_stuff(mocker):
    mocker.patch(
        'path.to.class.external_api_call.create_stuff',
        return_value=5
    )
    actual = my_class.create_stuff(my_number=5, name="foo")
    assert actual['status'] == 'success'
In the function, the external call handles the verification of the parameters (i.e. whether the XML structure is valid), but if I mock the external call I can't validate whether the rest of the function works as designed.
Is there a way to grab the parameters of the external call and validate them?
For example like this:
def test_create_folder_success(mocker, credentials_valid, status_succeded, jenkins_extension_mocked):
    my_mocker = mocker.patch(
        'path.to.class.external_api_call.create_stuff',
        return_value=5
    )
    actual = my_class.create_stuff(my_number=5, name="foo")
    xml = xml.etree.ElementTree.parse(my_mocker.name)  # pseudo-code: somehow get the XML argument
    value = my_mocker.new_value                        # pseudo-code: somehow get the number argument
    assert actual['status'] == 'success'
    assert value == 10
    assert validate_xml(xml) == True

Thank you jonrsharpe, and sorry for the late response!
With your help I was able to get it working:
def test_create_folder_success(credentials_valid, status_succeded):
    with mock.patch('path.to.class.external_api_call.create_stuff') as my_mocker:
        actual = my_class.create_stuff(my_number=5, name="foo")
    # call_args[0] is the tuple of positional arguments the mocked call received
    my_args = my_mocker.call_args
    stuff_xml, value = my_args[0]
    xml_tree = xml.etree.ElementTree.fromstring(stuff_xml)
    assert validate_xml(xml_tree) == True
    assert actual['status'] == 'success'
    assert value == 10
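A slightly tighter variant is sketched below, assuming the same my_class and validate_xml helpers as above: pytest-mock's mocker fixture patches and un-patches automatically, and the mock's call_args (or assert_called_once_with) exposes the recorded arguments directly.
def test_create_stuff_arguments(mocker):
    my_mocker = mocker.patch('path.to.class.external_api_call.create_stuff')

    actual = my_class.create_stuff(my_number=5, name="foo")

    # the positional arguments of the single recorded call
    stuff_xml, value = my_mocker.call_args[0]
    assert actual['status'] == 'success'
    assert value == 10
    assert validate_xml(xml.etree.ElementTree.fromstring(stuff_xml)) == True
    # or, if the exact XML string is known in advance:
    # my_mocker.assert_called_once_with(expected_xml, 10)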

Related

Testing a function that validates name

How do I write multiple test cases for a single function? I want the tests to essentially iterate through a bunch of cases and then return a boolean or answer saying that the function has passed them.
def test_valid01():
    name = 'jack'
    expected = True
    assert is_valid_name(name) == expected, "is_valid_name has failed."
    test1 = True
This is an example of one of my functions testing another function.
A way that you could run various tests for the is_valid_name() function:
def assert_test_valid(name, expected_value):
    # the message allows identification of which test failed
    assert is_valid_name(name) == expected_value, f"is_valid_name has failed for {name}"

def test_valid_all():
    test_responses = {  # the input and the expected response
        'jack': True,
    }
    for key, value in test_responses.items():  # go through all of the responses you would like to test
        assert_test_valid(key, value)
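A more pytest-idiomatic sketch of the same idea, assuming the same is_valid_name function (the second case is a hypothetical extra example): pytest.mark.parametrize reports each case separately instead of stopping at the first failing entry of the dict.
import pytest

@pytest.mark.parametrize("name, expected", [
    ('jack', True),
    ('', False),  # hypothetical extra case
])
def test_is_valid_name(name, expected):
    assert is_valid_name(name) == expected, f"is_valid_name has failed for {name}"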

Apply a function with target function args in advance and get returned results to be applied on that target function

I've updated this post with the actual scenario I'm trying to solve. I'm using Flask to build a RESTful API with the flask-restful plugin and the pika library to work with RabbitMQ. For example, this snippet of code illustrates an RPC request for authorization using info about user permissions and a bearer token.
def request_authz(self, metadata):
    """
    For an RPC request, the RPC client sends a message with two
    properties (i.e. `reply_to` and `correlation_id`) set to its
    callback queue.
    """
    self.response = None
    self.correlation_id = str(uuid.uuid4())
    props = pika.BasicProperties(reply_to=self.callback_queue,
                                 correlation_id=self.correlation_id)
    self.channel.basic_publish(exchange=AMQP_EXCHANGE,
                               routing_key=BINDING_STATISTICS,
                               body=json.dumps(metadata),  # the body must be a str/bytes, not a parsed object
                               properties=props)
    while self.response is None:
        self.conn_broker.process_data_events()
    return self.response
And this snippet of code illustrates a POST API endpoint for accessing resources after authorization is approved; it has not been implemented yet. The metadata should be the params acquired from this request.
def post(self):
    items = info(args=request.get_json()['params'])
    return {'data': items}
I want to make sure that every API endpoint is authorized via request_authz. In other words, every endpoint should get the response returned by request_authz, which indicates whether the request is authorized or not. How can I do this in a Pythonic way (e.g. using a decorator)?
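One possible sketch of the decorator approach, assuming a hypothetical rpc_client object exposing the request_authz method above, and assuming it returns a truthy value when the caller is authorized:
from functools import wraps
import json

from flask import request, abort
from flask_restful import Resource

def require_authz(func):
    """Run the RPC authorization check before the wrapped endpoint method."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        # hypothetical: build the metadata from the incoming request
        metadata = json.dumps({'params': request.get_json().get('params'),
                               'token': request.headers.get('Authorization')})
        if not rpc_client.request_authz(metadata):  # rpc_client is assumed to exist
            abort(403)
        return func(*args, **kwargs)
    return wrapper

class Items(Resource):
    @require_authz
    def post(self):
        items = info(args=request.get_json()['params'])
        return {'data': items}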
You could move the prints into a single function:
def util(value):
    if value > 0:
        return True
    else:
        return False

def check_value(value, value_type):
    if util(value):
        print('%s is %s' % (value_type, str(value)))
    else:
        print('%s must be positive' % value_type)

def func1(value):
    check_value(value, 'perimeter')

def func2(value):
    check_value(value, 'area')
Moreover, the util function can be reduced to:
def util(value):
    return value > 0
The util function could be reduced to:
def util(value):
    return value > 0
The return will be the same. In the original you say:
if value > 0:  # if the condition is true return True, if not return False
    return True
else:
    return False
It's easier to just return the comparison result.

How to call python api method inside another api in same class?

I have the following api method
@app.route('/api/v1/lessons', methods=['GET'])
def api_lessons():
    if 'courseId' in request.args:
        courseId = request.args['courseId']
    else:
        return "Error: No course id provided. Please specify a course id."
    onto = get_ontology("ontology.owl")
    onto.load()
    result = onto[courseId].contains
    result2 = []
    for i in result:
        temp = "{ id : " + str(i.Identifier) + ", name : " + str(i.Name) + "}"
        print(temp)
        result2.append(temp)
    return json.dumps(result2)
And I need to add a new method that calls this API internally with the same args:
@app.route('/api/v1/learningPath', methods=['GET'])
def api_learningPath():
    lessons = api_lessons
    return json.dumps(result2)
How can I do that?
You need to call the function instead of calling an internal API. Your api_lessons() will return a JSON string, and you will need to parse it back in order to use it. Your function would look like this:
@app.route('/api/v1/learningPath', methods=['GET'])
def api_learningPath():
    lessons = json.loads(api_lessons())
    # ... build the learning path from `lessons` ...
    return json.dumps(lessons)
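An alternative sketch, assuming the ontology lookup can be factored out into a hypothetical get_lessons helper: put the shared logic in a plain function that both endpoints call, so nothing has to be serialized and re-parsed.
def get_lessons(course_id):
    # hypothetical helper shared by both endpoints
    onto = get_ontology("ontology.owl")
    onto.load()
    return [{"id": str(i.Identifier), "name": str(i.Name)} for i in onto[course_id].contains]

@app.route('/api/v1/lessons', methods=['GET'])
def api_lessons():
    if 'courseId' not in request.args:
        return "Error: No course id provided. Please specify a course id."
    return json.dumps(get_lessons(request.args['courseId']))

@app.route('/api/v1/learningPath', methods=['GET'])
def api_learningPath():
    if 'courseId' not in request.args:
        return "Error: No course id provided. Please specify a course id."
    lessons = get_lessons(request.args['courseId'])
    return json.dumps(lessons)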

python class method mocking failure

I'm trying to understand mocking/patching, and I have a RESTful API project with three files (FYI, I'm using Flask):
class1.py
domain.py
test_domain.py
class1.py file content:
class one:
    def addition(self):
        return 4 + 5
domain.py file content:
from class1 import one

class DomainClass(Resource):
    def post(self):
        test1 = one()
        val = test1.addition()
        return {'test': val}
test_domain.py file content:
import my_app
from flask_api import status
from mock import patch

app = my_app.app.test_client()

def test_post():
    with patch('domain.one') as mock:
        instance = mock.return_value
        instance.addition.return_value = 'yello'
        url = '/domain'
        response = app.post(url)
        print(response.data)
        assert status.HTTP_200_OK == response.status_code
        assert mock.called
For my test_domain.py file, I've also tried this...
@patch('domain.one')
def test_post(mock_domain):
    mock_domain.addition.return_value = 1
    url = '/domain'
    response = app.post(url)
    print(response.data)
    assert status.HTTP_200_OK == response.status_code
My assert for the status of 200 passes; however, the problem is that I'm not able to mock or patch the addition method to give me a value of 1 in place of 9 (4+5). I also tried doing 'assert mock.called' and it fails as well. I know I should be mocking/patching where 'one()' is used, i.e. in domain.py, not in class1.py. But I tried even mocking class1.one in place of domain.one and I still kept getting 9 and not 1. What am I doing wrong?
******** Update
I have another dilemma on the same issue. I tried doing this in the test_domain file instead of patching:
from common.class1 import one

def test_post():
    one.addition = MagicMock(return_value=40)
    url = '/domain'
    response = app.post(url)
    print(response.data)
    assert status.HTTP_200_OK == response.status_code
Question
In the update above, I did not do the mock at the place where it is used (i.e. domain.one.addition = MagicMock(...)) and it still worked! It seems it may be doing a global change. Why did this work?
In the above example, 'one' is a class in the module class1.py. If I change 'one' to a function in class1.py, mocking does not work. It seems this function 'one' residing in module class1.py cannot be mocked like this: one.return_value = 'xyz'. Why? Can it be mocked globally?
There are some issues in your code. In the first example you forgot that patch() is applied in the with context and the original code is restored when the context ends. The following code should work:
def test_post():
    with patch('domain.one') as mock:
        instance = mock.return_value
        instance.addition.return_value = 'yello'
        url = '/domain'
        response = app.post(url)
        print(response.data)
        assert status.HTTP_200_OK == response.status_code
        assert mock.called
        assert response.data['test'] == 'yello'
The second one has another issue: if you want to patch just the addition method you should use:
@patch('domain.one.addition')
def test_post(mock_addition):
    mock_addition.return_value = 1
    ...
    assert mock_addition.called
    assert response.data['test'] == 1
If you want to mock the whole one class, you should set the return value of the addition method on the mock instance returned by calling mock_domain, like in your first example:
@patch('domain.one')
def test_post(mock_domain):
    mock_addition = mock_domain.return_value.addition
    mock_addition.return_value = 1
    ...
    assert mock_addition.called
    assert response.data['test'] == 1
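Regarding the Update: assigning one.addition = MagicMock(...) rebinds the attribute on the class object itself, which every module that imported one shares, so the change is visible "globally"; but it is never restored, so it can leak into other tests. A minimal sketch of patch.object as an equivalent with automatic cleanup, assuming the same app and imports as above:
from unittest.mock import patch
from common.class1 import one

def test_post_patch_object():
    # same effect as the manual assignment, but undone automatically on exit
    with patch.object(one, 'addition', return_value=40):
        response = app.post('/domain')
        assert status.HTTP_200_OK == response.status_code
    # here one.addition is the real method again
A module-level function is handled the same way: patch it where it is looked up (e.g. patch('domain.one')) and configure its return_value on the resulting mock.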

How can I determine if a test passed or failed by examining the Item object passed to the pytest_runtest_teardown?

Pytest allows you to hook into the teardown phase for each test by implementing a function called pytest_runtest_teardown in a plugin:
def pytest_runtest_teardown(item, nextitem):
    pass
Is there an attribute or method on item that I can use to determine whether the test that just finished running passed or failed? I couldn't find any documentation for pytest.Item and hunting through the source code and playing around in ipdb didn't reveal anything obvious.
You may also consider call.excinfo in pytest_runtest_makereport:
def pytest_runtest_makereport(item, call):
    if call.when == 'setup':
        print('Called after setup for test case is executed.')
    if call.when == 'call':
        print('Called after test case is executed.')
        print('-->{}<--'.format(call.excinfo))
    if call.when == 'teardown':
        print('Called after teardown for test case is executed.')
The call object contains a whole bunch of additional information (test start time, stop time, etc.).
Refer:
http://doc.pytest.org/en/latest/_modules/_pytest/runner.html
def pytest_runtest_makereport(item, call):
    when = call.when
    duration = call.stop - call.start
    keywords = dict([(x, 1) for x in item.keywords])
    excinfo = call.excinfo
    sections = []
    if not call.excinfo:
        outcome = "passed"
        longrepr = None
    else:
        if not isinstance(excinfo, ExceptionInfo):
            outcome = "failed"
            longrepr = excinfo
        elif excinfo.errisinstance(pytest.skip.Exception):
            outcome = "skipped"
            r = excinfo._getreprcrash()
            longrepr = (str(r.path), r.lineno, r.message)
        else:
            outcome = "failed"
            if call.when == "call":
                longrepr = item.repr_failure(excinfo)
            else:  # exception in setup or teardown
                longrepr = item._repr_failure_py(excinfo,
                                                 style=item.config.option.tbstyle)
    for rwhen, key, content in item._report_sections:
        sections.append(("Captured %s %s" % (key, rwhen), content))
    return TestReport(item.nodeid, item.location,
                      keywords, outcome, longrepr, when,
                      sections, duration)
The Node class doesn't have any information regarding the status of the last test; however, we do have the total number of failed tests (in item.session.testsfailed), and we can use it:
We can add a new member to the item.session object (not so nice, but you gotta love Python!). This member, item.session.last_testsfailed_status, saves the previous value of testsfailed.
If testsfailed > last_testsfailed_status, the test that just ran failed.
import pytest
import logging

logging.basicConfig(
    level='INFO',
    handlers=(
        logging.StreamHandler(),
        logging.FileHandler('log.txt')
    )
)

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_teardown(item, nextitem):
    yield
    if not hasattr(item.session, 'last_testsfailed_status'):
        item.session.last_testsfailed_status = 0
    if item.session.testsfailed and item.session.testsfailed > item.session.last_testsfailed_status:
        logging.info('Last test failed')
    item.session.last_testsfailed_status = item.session.testsfailed
Initially, I was also struggling to get the test status so that I could use it to build a custom report.
But after further analysis of the pytest_runtest_makereport hook function, I was able to see various attributes of its 3 params (item, call, and report).
Let me list some of them:
Call:
excinfo (this further drills down to carry traceback if any)
start (start time of the test in float value since epoch time)
stop (stop time of the test in float value since epoch time)
when (can take values - setup, call, teardown)
item:
_fixtureinfo (contains info about any fixtures you have used)
nodeid (the test_name assumed by pytest)
cls (contains the class info of test, by info I mean the variables which were declared and accessed in the class of test)
funcargs (what parameters you have passed to your test along with its values)
report:
outcome (this carries the test status)
longrepr (contains the failure info including the traceback)
when (can take values - setup, call, teardown; note that the report's contents depend on this value)
FYI: there are other attributes for all of the above 3 params; I have mentioned only a few.
Below is a code snippet depicting how I have hooked the function and used it.
def pytest_runtest_makereport(item, call, __multicall__):
    report = __multicall__.execute()
    if (call.when == "call") and hasattr(item, '_failed_expect'):
        report.outcome = "failed"
        summary = 'Failed Expectations:%s' % len(item._failed_expect)
        item._failed_expect.append(summary)
        report.longrepr = str(report.longrepr) + '\n' + ('\n'.join(item._failed_expect))
    if call.when == "call":
        ExTest.name = item.nodeid
        func_args = item.funcargs
        ExTest.parameters_used = dict((k, v) for k, v in func_args.items() if v and not hasattr(v, '__dict__'))
        t = datetime.fromtimestamp(call.start)
        ExTest.start_timestamp = t.strftime('%Y-%m-%d::%I:%M:%S %p')
        ExTest.test_status = report.outcome
        # TODO Get traceback info (call.excinfo.traceback)
    return report
Hook wrappers are the way to go: let all the default hooks run and then look at their results.
The example below shows two methods for detecting whether a test has failed (add it to your conftest.py):
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Because this is a hookwrapper, calling `yield` lets the actual hooks run & returns a `_Result`
    result = yield
    # Get the actual `TestReport` which the hook(s) returned, having done the hard work for you
    report = result.get_result()

    # Method 1: `report.longrepr` is either None or a failure representation
    if report.longrepr:
        logging.error('FAILED: %s', report.longrepr)
    else:
        logging.info('Did not fail...')

    # Method 2: `report.outcome` is always one of ['passed', 'failed', 'skipped']
    if report.outcome == 'failed':
        logging.error('FAILED: %s', report.longrepr)
    elif report.outcome == 'skipped':
        logging.info('Skipped')
    else:  # report.outcome == 'passed'
        logging.info('Passed')
See the TestReport documentation for details of longrepr and outcome.
(It doesn't use pytest_runtest_teardown as the OP requested, but it does easily let you check for failure.)
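If the check really has to happen inside pytest_runtest_teardown, one common sketch (assuming this lives in a conftest.py plugin) is to stash each phase's report on the item in a pytest_runtest_makereport hookwrapper and read it back during teardown:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # keep the report of each phase (setup/call/teardown) on the item
    setattr(item, "rep_" + report.when, report)

def pytest_runtest_teardown(item, nextitem):
    call_report = getattr(item, "rep_call", None)
    if call_report is not None and call_report.failed:
        print("test %s failed" % item.nodeid)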
