I have Lambdas which use boto3.client() to connect to DynamoDB.
I tried to test it like this:
@mock.patch("boto3.client")
def test(self, mock_client, test):
    handler(event, context)
    print(mock_client.call_count)  # 1
    print(mock_client.put_item.call_count)  # 0
However, while mock_client.call_count is 1, the put_item call count stays 0.
My handler looks like this:
def handler(event, context):
    dynamodb = boto3.client('dynamodb')
    response = dynamodb.put_item(...)  # same attributes
Any suggestion on how to test whether the correct item gets put into the database, without using moto?
I believe you're very close, there's just one tiny problem.
When your mocked boto3.client is called, it returns another mock, and you want to evaluate that mock's call_count. By accessing the return_value of the original mock, you get access to that created magic mock.
@mock.patch("boto3.client")
def test(self, mock_client, test):
    handler(event, context)
    print(mock_client.call_count)
    # .return_value refers to the magic mock that's
    # created when boto3.client is called
    print(mock_client.return_value.put_item.call_count)
What you're currently evaluating is the call count of boto3.client.put_item and not boto3.client("dynamodb").put_item().
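If you also want to verify that the correct item was written (the original question), you can assert on the arguments the nested mock received. A minimal sketch, assuming a hypothetical table name and item shape:
@mock.patch("boto3.client")
def test_put_item_arguments(self, mock_client):
    handler(event, context)
    mock_client.assert_called_once_with('dynamodb')
    # The put_item call lives on the mock returned by boto3.client(...)
    mock_client.return_value.put_item.assert_called_once_with(
        TableName='my-table',        # hypothetical table name
        Item={'id': {'S': '123'}}    # hypothetical item
    )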
I am writing tests for my Azure Function, and for some reason I can't mock a function call. I should also mention this is the first time I'm writing a Python test case, so be nice :)
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    try:
        req_body = req.get_json()
    except ValueError as error:
        logging.info(error)
    download_excel(req_body)
    return func.HttpResponse(
        "This HTTP triggered function executed successfully.",
        status_code=200
    )
So that's the initial function. It calls download_excel and passes the request body. The next function receives the request body and writes that Excel file to blob storage.
def download_excel(request_body: Any):
    excel_file = request_body["items_excel"]
    # initiate the blob storage client
    blob_service_client = BlobServiceClient.from_connection_string(os.environ["AzureWebJobsStorage"])
    container = blob_service_client.get_container_client(CONTAINER_NAME)
    blob_path = "excel-path/items.xlsx"
    blob_client = container.get_blob_client(blob_path)
    blob_client.upload_blob_from_url(excel_file)
Those are the two functions: receive a file, save it to blob storage. But I can't mock the download_excel call in the main function. I've tried using mock and patch, went through all sorts of links, and I just can't find a way to achieve this. Any help would be appreciated. Here is what I currently have in the test file.
class TestFunction(unittest.TestCase):
    # @patch('download_excel')
    def get_excel_files_main(self):
        """Test main function"""
        req = Mock()
        resp = main(req)
        # download_excel = MagicMock()
        self.assertEqual(resp.status_code, 200)
Commenting out the function call in the function and in the test makes the test pass, but I need to know how to mock the download_excel call. I'm still going to write a test case for the download_excel function, but I will cross that bridge when I get to it.
Figured it out. I'm pretty silly. The main issue was that, since this is an Azure Function with no class, I figured I could ignore every example in the docs that had to do with classes.
The trick is to use the function folder name like a class. Say you have a function named http_trigger, and an __init__.py file within that function folder. Within that __init__ file you have your main method and a second method that's called from the main method; you can replace that second method with a MagicMock.
import function_name

def test_main_function(self):
    """Testing main function"""
    function_name.second_method_being_called = MagicMock()
That's it. That's how you mock it! *facepalm*
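For completeness, a minimal sketch of how the full test might look with that trick, assuming the function folder is named function_name, that download_excel is reachable as an attribute of that module, and a made-up request body:
import unittest
from unittest.mock import MagicMock, Mock

import function_name  # the Azure Functions folder, importable like a module

class TestFunction(unittest.TestCase):
    def test_main_function(self):
        """main should return 200 and hand the request body to download_excel"""
        # Replace the helper on the module before calling main
        function_name.download_excel = MagicMock()
        req = Mock()
        req.get_json.return_value = {"items_excel": "https://example.com/items.xlsx"}  # hypothetical body

        resp = function_name.main(req)

        self.assertEqual(resp.status_code, 200)
        function_name.download_excel.assert_called_once_with(req.get_json.return_value)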
I'm trying to write a test for a function that uses a class as a dependency and calls this class's method(s).
Let's assume the function is:
def store_username_and_password(**kwargs) -> Tuple[str, StorageResult]:
    storage = MyDependency(param1, param2)
    try:
        storage.read_data(mountpoint, path)
    except InvalidPathException:
        storage.write_data(data, mountpoint, path)
    return (f"Stored successfully {some_params}", StorageResult(some_params))
In the test I'm trying to patch MyDependency like this:
input = {....}
with patch("my.application.namespace.MyDependency") as mock_storage:
    mock_storage.read_data.side_effect = InvalidPathException("the data does not exist yet")
    with raises(InvalidPathException) as e:
        store_username_and_password(**input)
However, when I debug it, step inside the function call from the test above, and proceed to the storage.read_data(mountpoint, path) call, I see in the debugger that there is no side_effect set. So it never raises the exception I want on the read_data call.
Anthony's comment helped me but I thought I'd add an explanation...
mock_storage.read_data.side_effect will work, but only if you call MyDependency.read_data(). What happens is the call to MyDependency(param1, param2) creates a new MagicMock that doesn't have the side_effect set on it. The new mock can be accessed with mock_storage.return_value.
So, if you have:
storage = MyDependency(param1, param2)
storage.read_data(mountpoint, path)
Then to mock that method you need:
with patch("my.application.namespace.MyDependency") as mock_storage:
mock_storage.return_value.read_data.side_effect = Exception
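Putting it together, a rough sketch of the whole test (the module path and call arguments come from the snippets above; the kwargs, imports, and final assertion are assumptions about what you want to verify):
from unittest.mock import patch

from my.application.namespace import store_username_and_password, InvalidPathException  # assumed location

def test_store_writes_when_path_is_missing():
    kwargs = {"username": "alice", "password": "secret"}  # hypothetical kwargs
    with patch("my.application.namespace.MyDependency") as mock_storage:
        # The instance created by MyDependency(param1, param2) is mock_storage.return_value,
        # so the side_effect has to be attached there.
        mock_storage.return_value.read_data.side_effect = InvalidPathException("the data does not exist yet")

        store_username_and_password(**kwargs)

        # The function catches the exception itself, so assert on the fallback write instead.
        mock_storage.return_value.write_data.assert_called_once()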
I have two scripts which I need to deploy to AWS Lambda. I have never done it before; from the documentation I put together a few steps which would summarize the flow:
Create a lambda function
Install boto3
Use invoke function
Let's say I have a simple function:
def first_function():
    return print('First function')
When I go to AWS -> Lambda -> Functions -> Create function I get to the configuration part where in the editor I see this:
import json

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Is this how I should edit this to deploy my function:
import json

def lambda_handler(event, context):
    # TODO implement
    return {
        def first_function():
            return print('First function')
        first_function()
    }
The lambda_handler that shows up when you create a function in the console is simply boilerplate code.
You can name your handler anything, or simply place your function code inside lambda_handler:
def lambda_handler(event, context):
    return print('First function')
The name lambda_handler is configurable, meaning you could use this code instead:
def first_function(event, context):
    return print('First function')
But you'll need to ensure that the function is configured to use first_function as its handler.
I'd recommend reading through the docs specific to Python handlers.
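For reference, the handler setting can be changed in the console under Runtime settings, or with the AWS CLI; a one-line sketch, assuming the code lives in lambda_function.py and the function is named my-function (both placeholders):
aws lambda update-function-configuration --function-name my-function --handler lambda_function.first_function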
Whatever functionality you need to implement in your Lambda, you should write it within the lambda_handler. If you want to use other, smaller functions, you can define them outside the handler and call them from within it. So it might look like the code below:
import x

def functiona():
    print('something')

def functionb():
    print('somethingelse')

def lambda_handler(event, context):
    print('lambda entry point')
    functiona()
    functionb()
Since the module is imported first, you can also put code outside of functions, although it is usually not good practice because that code cannot access the context and the parameters you have sent to the Lambda.
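For example, a common split is to create clients and other expensive objects at module level (they run once per container and are reused on warm invocations) and keep per-request logic in the handler, since event and context only exist there. A small sketch, with the table name and item made up:
import json
import boto3

# Module-level code: runs once when the module is imported,
# but has no access to event or context.
dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):
    # Per-request logic belongs here, where event and context are available.
    dynamodb.put_item(TableName='my-table', Item={'id': {'S': event['id']}})  # hypothetical table/item
    return {'statusCode': 200, 'body': json.dumps('Stored!')}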
I'm attempting to create a few unit tests for my class. I want to mock the API calls so that I don't burn through my API quota running these tests. I have multiple test cases that will call the fetch method, and depending on the passed URL I'll get different results back.
My example class looks like this:
import requests

class ExampleAPI(object):
    def fetch(self, url, params=None, key=None, token=None, **kwargs):
        return requests.get(url).json()  # Returns a JSON string
The tutorial I'm looking at shows that I can do something like this:
import unittest
from mock import patch

def fake_fetch_test_one(url):
    ...

class TestExampleAPI(unittest.TestCase):
    @patch('mymodule.ExampleAPI.fetch', fake_fetch_test_one)
    def test_fetch(self):
        e = ExampleAPI()
        self.assertEqual(e.fetch('http://my.api.url.example.com'), """{'result': 'True'}""")
When I do this, though, I get an error that says:
TypeError: fake_fetch_test_one() takes exactly 1 argument (3 given)
What is the proper way to mock a requests.get call that is in a method in my class? I'll need the ability to change the mock'd response per test, because different URLs can provide different response types.
Your fake fetch needs to accept the same arguments as the original:
def fake_fetch(self, url, params=None, key=None, token=None, **kwargs):
Note that it's better to mock just the external interface, which means letting fetch call requests.get (or at least, what it thinks is requests.get):
@patch('mymodule.requests.get')
def test_fetch(self, fake_get):
    # It would probably be better to just construct
    # a valid fake response object whose `json` method
    # would return the right thing, but this is easier
    # for demonstration purposes. I'm assuming nothing else
    # is done with the response.
    expected = {"result": "True"}
    fake_get.return_value.json.return_value = expected
    e = ExampleAPI()
    self.assertEqual(e.fetch('http://my.api.url.example.com'), expected)
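Since the question mentions that different URLs should return different responses, one option (just a sketch, not part of the answer above) is to give the patched requests.get a side_effect that picks the payload by URL. The URLs and payloads below are made up, Mock is assumed to be imported alongside patch, and the method lives in the same test class:
@patch('mymodule.requests.get')
def test_fetch_per_url(self, fake_get):
    payloads = {
        'http://example.com/users': {'result': 'users'},
        'http://example.com/items': {'result': 'items'},
    }

    def fake_get_impl(url, *args, **kwargs):
        # Build a fake response whose json() returns the payload for this URL
        response = Mock()
        response.json.return_value = payloads[url]
        return response

    fake_get.side_effect = fake_get_impl

    e = ExampleAPI()
    self.assertEqual(e.fetch('http://example.com/users'), {'result': 'users'})
    self.assertEqual(e.fetch('http://example.com/items'), {'result': 'items'})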
From your test method you can monkeypatch your requests module:
import unittest

class Mock:
    pass

ExampleAPI.requests = Mock()

def fake_get_test_one(url):
    # returns a fake JSON payload for the GET request
    ...

ExampleAPI.requests.get = Mock()
ExampleAPI.requests.json = fake_get_test_one
class TestExampleAPI(unittest.TestCase):
    def test_fetch(self):
        e = ExampleAPI()
        self.assertEqual(e.fetch('http://my.api.url.example.com'), """{'result': 'True'}""")
You can set up the patch in the setUp() and corresponding tearDown() methods of your test class if needed.
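A rough sketch of that setUp()/tearDown() approach, reusing the patch target from the previous answer:
import unittest
from mock import patch

class TestExampleAPI(unittest.TestCase):
    def setUp(self):
        # Start the patch before every test...
        self.get_patcher = patch('mymodule.requests.get')
        self.fake_get = self.get_patcher.start()

    def tearDown(self):
        # ...and make sure it is stopped afterwards.
        self.get_patcher.stop()

    def test_fetch(self):
        self.fake_get.return_value.json.return_value = {'result': 'True'}
        e = ExampleAPI()
        self.assertEqual(e.fetch('http://my.api.url.example.com'), {'result': 'True'})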
My webapp sends a message to AWS SQS with boto, and I want to mock out sending the actual message and just check that send_message is called. However, I do not understand how to use Python mock to patch a function that the function under test calls.
How could I mock out boto's con.send_message as in the pseudo-code below?
views.py:
@app.route('/test')
def send_msg():
    con = boto.sqs.connect_to_region("eu-west-1", aws_access_key_id="asd", aws_secret_access_key="asd")
    que = con.get_queue('my_queue')
    msg = json.dumps({'data': 'asd'})
    r = con.send_message(que, msg)
tests.py
class MyTestCase(unittest.TestCase):
    def test_test(self):
        with patch('views.con.send_message') as sqs_send:
            self.test_client.get('/test')
            assert(sqs_send.called)
To do this kind of test you need to patch connect_to_region(). When this method is patched, it returns a MagicMock() object that you can use to test all of your function's behavior.
Your test case can be something like this one:
class MyTestCase(unittest.TestCase):
    @patch("boto.sqs.connect_to_region", autospec=True)
    def test_test(self, mock_connect_to_region):
        # grab the mocked connection returned by the patched connect_to_region
        mock_con = mock_connect_to_region.return_value
        # call the client
        self.test_client.get('/test')
        # test the connect_to_region call
        mock_connect_to_region.assert_called_with("eu-west-1", aws_access_key_id="asd", aws_secret_access_key="asd")
        # test get_queue()
        mock_con.get_queue.assert_called_with('my_queue')
        # finally, test send_message
        mock_con.send_message.assert_called_with(mock_con.get_queue.return_value, json.dumps({'data': 'asd'}))
Just some notes:
I wrote it in a white-box style and checked all the calls in your view; you can be looser and omit some checks. Use self.assertTrue(mock_con.send_message.called) if you just want to check that the call happened, or use mock.ANY as an argument if you are not interested in some argument's content (see the short sketch at the end of this answer).
autospec=True is not mandatory but very useful: take a look at autospeccing.
I apologize if the code contains some errors; I cannot test it now, but I hope the idea is clear enough.
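For the looser check mentioned in the notes above, a short sketch with mock.ANY when the exact message body does not matter:
from mock import ANY

# Accept any second argument (the serialized message) as long as the queue matches.
mock_con.send_message.assert_called_with(mock_con.get_queue.return_value, ANY)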